Test Report: KVM_Linux_crio 20384

1ec4157795cc89d548b96d897d53d69581daf40e:2025-04-14:39134

Failed tests (20/327)

TestAddons/parallel/Ingress (155.22s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-809953 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-809953 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-809953 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e74bcb1d-5b31-4ba9-b1d1-c80d3be8f404] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e74bcb1d-5b31-4ba9-b1d1-c80d3be8f404] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004393132s
I0414 12:22:28.155573 1175746 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-809953 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.318989555s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
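Note: the core failure above is the in-VM curl against the ingress controller. The remote command exited with status 28, which is most plausibly curl's operation-timeout code surfacing through ssh. A minimal manual re-check against the same profile (assuming the addons-809953 cluster is still running; the -v flag is an addition here for verbose output) would be:

	kubectl --context addons-809953 get pods -n ingress-nginx -l app.kubernetes.io/component=controller
	out/minikube-linux-amd64 -p addons-809953 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"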
addons_test.go:286: (dbg) Run:  kubectl --context addons-809953 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.2
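The ingress-dns steps above resolve a test hostname directly against the VM address returned by the ip command; a rough manual equivalent (a hypothetical one-liner, assuming the same profile and addon state) is:

	nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-809953 ip)"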
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-809953 -n addons-809953
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-809953 logs -n 25: (1.307301716s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-415624                                                                     | download-only-415624 | jenkins | v1.35.0 | 14 Apr 25 12:19 UTC | 14 Apr 25 12:19 UTC |
	| delete  | -p download-only-069523                                                                     | download-only-069523 | jenkins | v1.35.0 | 14 Apr 25 12:19 UTC | 14 Apr 25 12:19 UTC |
	| delete  | -p download-only-415624                                                                     | download-only-415624 | jenkins | v1.35.0 | 14 Apr 25 12:19 UTC | 14 Apr 25 12:19 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-104825 | jenkins | v1.35.0 | 14 Apr 25 12:19 UTC |                     |
	|         | binary-mirror-104825                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45431                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-104825                                                                     | binary-mirror-104825 | jenkins | v1.35.0 | 14 Apr 25 12:19 UTC | 14 Apr 25 12:19 UTC |
	| addons  | disable dashboard -p                                                                        | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:19 UTC |                     |
	|         | addons-809953                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:19 UTC |                     |
	|         | addons-809953                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-809953 --wait=true                                                                | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:19 UTC | 14 Apr 25 12:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-809953 addons disable                                                                | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:21 UTC | 14 Apr 25 12:21 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-809953 addons disable                                                                | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	|         | -p addons-809953                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-809953 addons                                                                        | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-809953 addons                                                                        | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-809953 addons disable                                                                | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-809953 addons                                                                        | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-809953 ip                                                                            | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	| addons  | addons-809953 addons disable                                                                | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-809953 ssh curl -s                                                                   | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-809953 addons disable                                                                | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-809953 ssh cat                                                                       | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	|         | /opt/local-path-provisioner/pvc-d24cdd85-0538-482e-b012-7f14c849e8a6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-809953 addons disable                                                                | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:23 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-809953 addons                                                                        | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:22 UTC | 14 Apr 25 12:22 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-809953 addons                                                                        | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:23 UTC | 14 Apr 25 12:23 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-809953 addons                                                                        | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:23 UTC | 14 Apr 25 12:23 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-809953 ip                                                                            | addons-809953        | jenkins | v1.35.0 | 14 Apr 25 12:24 UTC | 14 Apr 25 12:24 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:19:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:19:29.040782 1176576 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:19:29.040947 1176576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:19:29.040961 1176576 out.go:358] Setting ErrFile to fd 2...
	I0414 12:19:29.040966 1176576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:19:29.041172 1176576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 12:19:29.041878 1176576 out.go:352] Setting JSON to false
	I0414 12:19:29.043055 1176576 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14516,"bootTime":1744618653,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:19:29.043199 1176576 start.go:139] virtualization: kvm guest
	I0414 12:19:29.045547 1176576 out.go:177] * [addons-809953] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:19:29.047585 1176576 notify.go:220] Checking for updates...
	I0414 12:19:29.047718 1176576 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 12:19:29.049707 1176576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:19:29.051308 1176576 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 12:19:29.053048 1176576 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 12:19:29.054759 1176576 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:19:29.056412 1176576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:19:29.058190 1176576 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:19:29.096888 1176576 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 12:19:29.098435 1176576 start.go:297] selected driver: kvm2
	I0414 12:19:29.098468 1176576 start.go:901] validating driver "kvm2" against <nil>
	I0414 12:19:29.098488 1176576 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:19:29.099406 1176576 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:19:29.099547 1176576 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:19:29.118742 1176576 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:19:29.118829 1176576 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 12:19:29.119107 1176576 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 12:19:29.119162 1176576 cni.go:84] Creating CNI manager for ""
	I0414 12:19:29.119222 1176576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:19:29.119234 1176576 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 12:19:29.119300 1176576 start.go:340] cluster config:
	{Name:addons-809953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-809953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:19:29.119433 1176576 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:19:29.121573 1176576 out.go:177] * Starting "addons-809953" primary control-plane node in "addons-809953" cluster
	I0414 12:19:29.123173 1176576 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:19:29.123246 1176576 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:19:29.123257 1176576 cache.go:56] Caching tarball of preloaded images
	I0414 12:19:29.123354 1176576 preload.go:172] Found /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 12:19:29.123367 1176576 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 12:19:29.123742 1176576 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/config.json ...
	I0414 12:19:29.123774 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/config.json: {Name:mk5addf62c530051d661b4e9081cde2463e2be82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:19:29.123938 1176576 start.go:360] acquireMachinesLock for addons-809953: {Name:mk1c744e856a4885e6214d755048926590b4b12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 12:19:29.123987 1176576 start.go:364] duration metric: took 35.796µs to acquireMachinesLock for "addons-809953"
	I0414 12:19:29.124037 1176576 start.go:93] Provisioning new machine with config: &{Name:addons-809953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-809953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:19:29.124126 1176576 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 12:19:29.126024 1176576 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0414 12:19:29.126195 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:19:29.126238 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:19:29.142186 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0414 12:19:29.142836 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:19:29.143476 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:19:29.143504 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:19:29.143936 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:19:29.144186 1176576 main.go:141] libmachine: (addons-809953) Calling .GetMachineName
	I0414 12:19:29.144393 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:19:29.144639 1176576 start.go:159] libmachine.API.Create for "addons-809953" (driver="kvm2")
	I0414 12:19:29.144709 1176576 client.go:168] LocalClient.Create starting
	I0414 12:19:29.144773 1176576 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem
	I0414 12:19:29.260488 1176576 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem
	I0414 12:19:29.547553 1176576 main.go:141] libmachine: Running pre-create checks...
	I0414 12:19:29.547577 1176576 main.go:141] libmachine: (addons-809953) Calling .PreCreateCheck
	I0414 12:19:29.548201 1176576 main.go:141] libmachine: (addons-809953) Calling .GetConfigRaw
	I0414 12:19:29.548702 1176576 main.go:141] libmachine: Creating machine...
	I0414 12:19:29.548721 1176576 main.go:141] libmachine: (addons-809953) Calling .Create
	I0414 12:19:29.548899 1176576 main.go:141] libmachine: (addons-809953) creating KVM machine...
	I0414 12:19:29.548924 1176576 main.go:141] libmachine: (addons-809953) creating network...
	I0414 12:19:29.550794 1176576 main.go:141] libmachine: (addons-809953) DBG | found existing default KVM network
	I0414 12:19:29.551863 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:29.551578 1176599 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000131e0}
	I0414 12:19:29.551889 1176576 main.go:141] libmachine: (addons-809953) DBG | created network xml: 
	I0414 12:19:29.551899 1176576 main.go:141] libmachine: (addons-809953) DBG | <network>
	I0414 12:19:29.551905 1176576 main.go:141] libmachine: (addons-809953) DBG |   <name>mk-addons-809953</name>
	I0414 12:19:29.551913 1176576 main.go:141] libmachine: (addons-809953) DBG |   <dns enable='no'/>
	I0414 12:19:29.551919 1176576 main.go:141] libmachine: (addons-809953) DBG |   
	I0414 12:19:29.551930 1176576 main.go:141] libmachine: (addons-809953) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 12:19:29.551960 1176576 main.go:141] libmachine: (addons-809953) DBG |     <dhcp>
	I0414 12:19:29.552010 1176576 main.go:141] libmachine: (addons-809953) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 12:19:29.552030 1176576 main.go:141] libmachine: (addons-809953) DBG |     </dhcp>
	I0414 12:19:29.552038 1176576 main.go:141] libmachine: (addons-809953) DBG |   </ip>
	I0414 12:19:29.552043 1176576 main.go:141] libmachine: (addons-809953) DBG |   
	I0414 12:19:29.552049 1176576 main.go:141] libmachine: (addons-809953) DBG | </network>
	I0414 12:19:29.552061 1176576 main.go:141] libmachine: (addons-809953) DBG | 
	I0414 12:19:29.558527 1176576 main.go:141] libmachine: (addons-809953) DBG | trying to create private KVM network mk-addons-809953 192.168.39.0/24...
	I0414 12:19:29.636088 1176576 main.go:141] libmachine: (addons-809953) DBG | private KVM network mk-addons-809953 192.168.39.0/24 created
	I0414 12:19:29.636153 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:29.635997 1176599 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 12:19:29.636171 1176576 main.go:141] libmachine: (addons-809953) setting up store path in /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953 ...
	I0414 12:19:29.636184 1176576 main.go:141] libmachine: (addons-809953) building disk image from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 12:19:29.636198 1176576 main.go:141] libmachine: (addons-809953) Downloading /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 12:19:29.951541 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:29.951378 1176599 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa...
	I0414 12:19:30.181083 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:30.180876 1176599 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/addons-809953.rawdisk...
	I0414 12:19:30.181120 1176576 main.go:141] libmachine: (addons-809953) DBG | Writing magic tar header
	I0414 12:19:30.181129 1176576 main.go:141] libmachine: (addons-809953) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953 (perms=drwx------)
	I0414 12:19:30.181134 1176576 main.go:141] libmachine: (addons-809953) DBG | Writing SSH key tar header
	I0414 12:19:30.181146 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:30.181002 1176599 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953 ...
	I0414 12:19:30.181155 1176576 main.go:141] libmachine: (addons-809953) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953
	I0414 12:19:30.181163 1176576 main.go:141] libmachine: (addons-809953) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines
	I0414 12:19:30.181169 1176576 main.go:141] libmachine: (addons-809953) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 12:19:30.181178 1176576 main.go:141] libmachine: (addons-809953) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927
	I0414 12:19:30.181192 1176576 main.go:141] libmachine: (addons-809953) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines (perms=drwxr-xr-x)
	I0414 12:19:30.181202 1176576 main.go:141] libmachine: (addons-809953) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 12:19:30.181213 1176576 main.go:141] libmachine: (addons-809953) DBG | checking permissions on dir: /home/jenkins
	I0414 12:19:30.181220 1176576 main.go:141] libmachine: (addons-809953) DBG | checking permissions on dir: /home
	I0414 12:19:30.181230 1176576 main.go:141] libmachine: (addons-809953) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube (perms=drwxr-xr-x)
	I0414 12:19:30.181237 1176576 main.go:141] libmachine: (addons-809953) DBG | skipping /home - not owner
	I0414 12:19:30.181252 1176576 main.go:141] libmachine: (addons-809953) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927 (perms=drwxrwxr-x)
	I0414 12:19:30.181265 1176576 main.go:141] libmachine: (addons-809953) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 12:19:30.181276 1176576 main.go:141] libmachine: (addons-809953) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 12:19:30.181286 1176576 main.go:141] libmachine: (addons-809953) creating domain...
	I0414 12:19:30.182602 1176576 main.go:141] libmachine: (addons-809953) define libvirt domain using xml: 
	I0414 12:19:30.182653 1176576 main.go:141] libmachine: (addons-809953) <domain type='kvm'>
	I0414 12:19:30.182668 1176576 main.go:141] libmachine: (addons-809953)   <name>addons-809953</name>
	I0414 12:19:30.182674 1176576 main.go:141] libmachine: (addons-809953)   <memory unit='MiB'>4000</memory>
	I0414 12:19:30.182679 1176576 main.go:141] libmachine: (addons-809953)   <vcpu>2</vcpu>
	I0414 12:19:30.182683 1176576 main.go:141] libmachine: (addons-809953)   <features>
	I0414 12:19:30.182688 1176576 main.go:141] libmachine: (addons-809953)     <acpi/>
	I0414 12:19:30.182691 1176576 main.go:141] libmachine: (addons-809953)     <apic/>
	I0414 12:19:30.182696 1176576 main.go:141] libmachine: (addons-809953)     <pae/>
	I0414 12:19:30.182699 1176576 main.go:141] libmachine: (addons-809953)     
	I0414 12:19:30.182704 1176576 main.go:141] libmachine: (addons-809953)   </features>
	I0414 12:19:30.182711 1176576 main.go:141] libmachine: (addons-809953)   <cpu mode='host-passthrough'>
	I0414 12:19:30.182717 1176576 main.go:141] libmachine: (addons-809953)   
	I0414 12:19:30.182721 1176576 main.go:141] libmachine: (addons-809953)   </cpu>
	I0414 12:19:30.182725 1176576 main.go:141] libmachine: (addons-809953)   <os>
	I0414 12:19:30.182730 1176576 main.go:141] libmachine: (addons-809953)     <type>hvm</type>
	I0414 12:19:30.182734 1176576 main.go:141] libmachine: (addons-809953)     <boot dev='cdrom'/>
	I0414 12:19:30.182739 1176576 main.go:141] libmachine: (addons-809953)     <boot dev='hd'/>
	I0414 12:19:30.182744 1176576 main.go:141] libmachine: (addons-809953)     <bootmenu enable='no'/>
	I0414 12:19:30.182754 1176576 main.go:141] libmachine: (addons-809953)   </os>
	I0414 12:19:30.182759 1176576 main.go:141] libmachine: (addons-809953)   <devices>
	I0414 12:19:30.182768 1176576 main.go:141] libmachine: (addons-809953)     <disk type='file' device='cdrom'>
	I0414 12:19:30.182823 1176576 main.go:141] libmachine: (addons-809953)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/boot2docker.iso'/>
	I0414 12:19:30.182849 1176576 main.go:141] libmachine: (addons-809953)       <target dev='hdc' bus='scsi'/>
	I0414 12:19:30.182860 1176576 main.go:141] libmachine: (addons-809953)       <readonly/>
	I0414 12:19:30.182869 1176576 main.go:141] libmachine: (addons-809953)     </disk>
	I0414 12:19:30.182881 1176576 main.go:141] libmachine: (addons-809953)     <disk type='file' device='disk'>
	I0414 12:19:30.182892 1176576 main.go:141] libmachine: (addons-809953)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 12:19:30.182907 1176576 main.go:141] libmachine: (addons-809953)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/addons-809953.rawdisk'/>
	I0414 12:19:30.182924 1176576 main.go:141] libmachine: (addons-809953)       <target dev='hda' bus='virtio'/>
	I0414 12:19:30.182935 1176576 main.go:141] libmachine: (addons-809953)     </disk>
	I0414 12:19:30.182944 1176576 main.go:141] libmachine: (addons-809953)     <interface type='network'>
	I0414 12:19:30.182956 1176576 main.go:141] libmachine: (addons-809953)       <source network='mk-addons-809953'/>
	I0414 12:19:30.182967 1176576 main.go:141] libmachine: (addons-809953)       <model type='virtio'/>
	I0414 12:19:30.182983 1176576 main.go:141] libmachine: (addons-809953)     </interface>
	I0414 12:19:30.182998 1176576 main.go:141] libmachine: (addons-809953)     <interface type='network'>
	I0414 12:19:30.183011 1176576 main.go:141] libmachine: (addons-809953)       <source network='default'/>
	I0414 12:19:30.183022 1176576 main.go:141] libmachine: (addons-809953)       <model type='virtio'/>
	I0414 12:19:30.183034 1176576 main.go:141] libmachine: (addons-809953)     </interface>
	I0414 12:19:30.183044 1176576 main.go:141] libmachine: (addons-809953)     <serial type='pty'>
	I0414 12:19:30.183054 1176576 main.go:141] libmachine: (addons-809953)       <target port='0'/>
	I0414 12:19:30.183069 1176576 main.go:141] libmachine: (addons-809953)     </serial>
	I0414 12:19:30.183092 1176576 main.go:141] libmachine: (addons-809953)     <console type='pty'>
	I0414 12:19:30.183103 1176576 main.go:141] libmachine: (addons-809953)       <target type='serial' port='0'/>
	I0414 12:19:30.183111 1176576 main.go:141] libmachine: (addons-809953)     </console>
	I0414 12:19:30.183119 1176576 main.go:141] libmachine: (addons-809953)     <rng model='virtio'>
	I0414 12:19:30.183159 1176576 main.go:141] libmachine: (addons-809953)       <backend model='random'>/dev/random</backend>
	I0414 12:19:30.183174 1176576 main.go:141] libmachine: (addons-809953)     </rng>
	I0414 12:19:30.183183 1176576 main.go:141] libmachine: (addons-809953)     
	I0414 12:19:30.183198 1176576 main.go:141] libmachine: (addons-809953)     
	I0414 12:19:30.183211 1176576 main.go:141] libmachine: (addons-809953)   </devices>
	I0414 12:19:30.183220 1176576 main.go:141] libmachine: (addons-809953) </domain>
	I0414 12:19:30.183232 1176576 main.go:141] libmachine: (addons-809953) 
	I0414 12:19:30.190963 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:fd:81:fc in network default
	I0414 12:19:30.191737 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:30.191776 1176576 main.go:141] libmachine: (addons-809953) starting domain...
	I0414 12:19:30.191790 1176576 main.go:141] libmachine: (addons-809953) ensuring networks are active...
	I0414 12:19:30.192657 1176576 main.go:141] libmachine: (addons-809953) Ensuring network default is active
	I0414 12:19:30.193360 1176576 main.go:141] libmachine: (addons-809953) Ensuring network mk-addons-809953 is active
	I0414 12:19:30.194262 1176576 main.go:141] libmachine: (addons-809953) getting domain XML...
	I0414 12:19:30.195264 1176576 main.go:141] libmachine: (addons-809953) creating domain...
	I0414 12:19:31.938650 1176576 main.go:141] libmachine: (addons-809953) waiting for IP...
	I0414 12:19:31.939470 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:31.940111 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:31.940191 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:31.940129 1176599 retry.go:31] will retry after 267.467957ms: waiting for domain to come up
	I0414 12:19:32.210016 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:32.210685 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:32.210798 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:32.210620 1176599 retry.go:31] will retry after 237.488751ms: waiting for domain to come up
	I0414 12:19:32.450325 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:32.450890 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:32.450916 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:32.450866 1176599 retry.go:31] will retry after 389.766265ms: waiting for domain to come up
	I0414 12:19:32.842666 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:32.843279 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:32.843309 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:32.843223 1176599 retry.go:31] will retry after 390.902257ms: waiting for domain to come up
	I0414 12:19:33.235841 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:33.236327 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:33.236357 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:33.236287 1176599 retry.go:31] will retry after 626.274827ms: waiting for domain to come up
	I0414 12:19:33.864411 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:33.865052 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:33.865076 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:33.865023 1176599 retry.go:31] will retry after 750.801695ms: waiting for domain to come up
	I0414 12:19:34.618239 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:34.618863 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:34.618905 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:34.618842 1176599 retry.go:31] will retry after 1.000873923s: waiting for domain to come up
	I0414 12:19:35.622387 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:35.622962 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:35.623000 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:35.622910 1176599 retry.go:31] will retry after 1.470941844s: waiting for domain to come up
	I0414 12:19:37.096026 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:37.096590 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:37.096618 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:37.096524 1176599 retry.go:31] will retry after 1.250803701s: waiting for domain to come up
	I0414 12:19:38.348963 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:38.349530 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:38.349557 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:38.349481 1176599 retry.go:31] will retry after 1.893288715s: waiting for domain to come up
	I0414 12:19:40.244661 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:40.245228 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:40.245259 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:40.245170 1176599 retry.go:31] will retry after 2.47104527s: waiting for domain to come up
	I0414 12:19:42.720025 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:42.720472 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:42.720532 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:42.720467 1176599 retry.go:31] will retry after 3.258123796s: waiting for domain to come up
	I0414 12:19:45.980715 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:45.981306 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:45.981335 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:45.981254 1176599 retry.go:31] will retry after 3.672588392s: waiting for domain to come up
	I0414 12:19:49.656033 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:49.656534 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find current IP address of domain addons-809953 in network mk-addons-809953
	I0414 12:19:49.656566 1176576 main.go:141] libmachine: (addons-809953) DBG | I0414 12:19:49.656457 1176599 retry.go:31] will retry after 4.725292168s: waiting for domain to come up
	I0414 12:19:54.383131 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.383802 1176576 main.go:141] libmachine: (addons-809953) found domain IP: 192.168.39.2
	I0414 12:19:54.383834 1176576 main.go:141] libmachine: (addons-809953) reserving static IP address...
	I0414 12:19:54.383849 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has current primary IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.384368 1176576 main.go:141] libmachine: (addons-809953) DBG | unable to find host DHCP lease matching {name: "addons-809953", mac: "52:54:00:22:9a:ae", ip: "192.168.39.2"} in network mk-addons-809953
	I0414 12:19:54.484107 1176576 main.go:141] libmachine: (addons-809953) reserved static IP address 192.168.39.2 for domain addons-809953
	I0414 12:19:54.484135 1176576 main.go:141] libmachine: (addons-809953) DBG | Getting to WaitForSSH function...
	I0414 12:19:54.484142 1176576 main.go:141] libmachine: (addons-809953) waiting for SSH...
	I0414 12:19:54.487188 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.487771 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:54.487812 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.488025 1176576 main.go:141] libmachine: (addons-809953) DBG | Using SSH client type: external
	I0414 12:19:54.488050 1176576 main.go:141] libmachine: (addons-809953) DBG | Using SSH private key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa (-rw-------)
	I0414 12:19:54.488100 1176576 main.go:141] libmachine: (addons-809953) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 12:19:54.488117 1176576 main.go:141] libmachine: (addons-809953) DBG | About to run SSH command:
	I0414 12:19:54.488129 1176576 main.go:141] libmachine: (addons-809953) DBG | exit 0
	I0414 12:19:54.616322 1176576 main.go:141] libmachine: (addons-809953) DBG | SSH cmd err, output: <nil>: 
	I0414 12:19:54.616680 1176576 main.go:141] libmachine: (addons-809953) KVM machine creation complete
	I0414 12:19:54.617119 1176576 main.go:141] libmachine: (addons-809953) Calling .GetConfigRaw
	I0414 12:19:54.617765 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:19:54.618037 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:19:54.618214 1176576 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 12:19:54.618232 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:19:54.619875 1176576 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 12:19:54.619906 1176576 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 12:19:54.619912 1176576 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 12:19:54.619923 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:54.622880 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.623279 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:54.623306 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.623638 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:19:54.624009 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:54.624356 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:54.624664 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:19:54.624930 1176576 main.go:141] libmachine: Using SSH client type: native
	I0414 12:19:54.625168 1176576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 12:19:54.625185 1176576 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 12:19:54.735153 1176576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 12:19:54.735189 1176576 main.go:141] libmachine: Detecting the provisioner...
	I0414 12:19:54.735202 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:54.738357 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.738830 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:54.738854 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.739036 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:19:54.739263 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:54.739461 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:54.739712 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:19:54.739900 1176576 main.go:141] libmachine: Using SSH client type: native
	I0414 12:19:54.740131 1176576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 12:19:54.740142 1176576 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 12:19:54.848467 1176576 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 12:19:54.848542 1176576 main.go:141] libmachine: found compatible host: buildroot
	I0414 12:19:54.848555 1176576 main.go:141] libmachine: Provisioning with buildroot...
	I0414 12:19:54.848568 1176576 main.go:141] libmachine: (addons-809953) Calling .GetMachineName
	I0414 12:19:54.848865 1176576 buildroot.go:166] provisioning hostname "addons-809953"
	I0414 12:19:54.848900 1176576 main.go:141] libmachine: (addons-809953) Calling .GetMachineName
	I0414 12:19:54.849135 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:54.852365 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.852882 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:54.852917 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.853235 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:19:54.853559 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:54.853793 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:54.854060 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:19:54.854364 1176576 main.go:141] libmachine: Using SSH client type: native
	I0414 12:19:54.854578 1176576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 12:19:54.854591 1176576 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-809953 && echo "addons-809953" | sudo tee /etc/hostname
	I0414 12:19:54.983301 1176576 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-809953
	
	I0414 12:19:54.983341 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:54.988079 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.988608 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:54.988637 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:54.988998 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:19:54.989339 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:54.989575 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:54.989747 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:19:54.989938 1176576 main.go:141] libmachine: Using SSH client type: native
	I0414 12:19:54.990168 1176576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 12:19:54.990185 1176576 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-809953' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-809953/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-809953' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 12:19:55.109054 1176576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 12:19:55.109097 1176576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 12:19:55.109124 1176576 buildroot.go:174] setting up certificates
	I0414 12:19:55.109139 1176576 provision.go:84] configureAuth start
	I0414 12:19:55.109149 1176576 main.go:141] libmachine: (addons-809953) Calling .GetMachineName
	I0414 12:19:55.109557 1176576 main.go:141] libmachine: (addons-809953) Calling .GetIP
	I0414 12:19:55.113953 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:55.114374 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:55.114411 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:55.114710 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:55.118666 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:55.119410 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:55.119442 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:55.119716 1176576 provision.go:143] copyHostCerts
	I0414 12:19:55.119843 1176576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 12:19:55.120152 1176576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 12:19:55.120319 1176576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 12:19:55.120409 1176576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.addons-809953 san=[127.0.0.1 192.168.39.2 addons-809953 localhost minikube]
	I0414 12:19:55.621356 1176576 provision.go:177] copyRemoteCerts
	I0414 12:19:55.621427 1176576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 12:19:55.621457 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:55.624626 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:55.625137 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:55.625176 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:55.625306 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:19:55.625545 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:55.625761 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:19:55.625939 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:19:55.709613 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 12:19:55.737541 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 12:19:55.765475 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 12:19:55.790164 1176576 provision.go:87] duration metric: took 681.007011ms to configureAuth
	I0414 12:19:55.790196 1176576 buildroot.go:189] setting minikube options for container-runtime
	I0414 12:19:55.790394 1176576 config.go:182] Loaded profile config "addons-809953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:19:55.790473 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:55.793862 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:55.794408 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:55.794454 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:55.794645 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:19:55.794939 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:55.795162 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:55.795370 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:19:55.795578 1176576 main.go:141] libmachine: Using SSH client type: native
	I0414 12:19:55.795844 1176576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 12:19:55.795872 1176576 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 12:19:56.037632 1176576 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 12:19:56.037671 1176576 main.go:141] libmachine: Checking connection to Docker...
	I0414 12:19:56.037682 1176576 main.go:141] libmachine: (addons-809953) Calling .GetURL
	I0414 12:19:56.039398 1176576 main.go:141] libmachine: (addons-809953) DBG | using libvirt version 6000000
	I0414 12:19:56.041900 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.042219 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:56.042257 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.042416 1176576 main.go:141] libmachine: Docker is up and running!
	I0414 12:19:56.042435 1176576 main.go:141] libmachine: Reticulating splines...
	I0414 12:19:56.042444 1176576 client.go:171] duration metric: took 26.897721147s to LocalClient.Create
	I0414 12:19:56.042485 1176576 start.go:167] duration metric: took 26.89789398s to libmachine.API.Create "addons-809953"
	I0414 12:19:56.042532 1176576 start.go:293] postStartSetup for "addons-809953" (driver="kvm2")
	I0414 12:19:56.042542 1176576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 12:19:56.042561 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:19:56.042913 1176576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 12:19:56.042950 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:56.045933 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.046286 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:56.046325 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.046518 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:19:56.046841 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:56.047065 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:19:56.047220 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:19:56.134727 1176576 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 12:19:56.139814 1176576 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 12:19:56.139867 1176576 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 12:19:56.139979 1176576 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 12:19:56.140052 1176576 start.go:296] duration metric: took 97.512576ms for postStartSetup
	I0414 12:19:56.140108 1176576 main.go:141] libmachine: (addons-809953) Calling .GetConfigRaw
	I0414 12:19:56.140922 1176576 main.go:141] libmachine: (addons-809953) Calling .GetIP
	I0414 12:19:56.144680 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.145271 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:56.145312 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.145732 1176576 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/config.json ...
	I0414 12:19:56.145995 1176576 start.go:128] duration metric: took 27.021853304s to createHost
	I0414 12:19:56.146027 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:56.148834 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.149260 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:56.149306 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.149569 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:19:56.149836 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:56.150070 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:56.150203 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:19:56.150463 1176576 main.go:141] libmachine: Using SSH client type: native
	I0414 12:19:56.150734 1176576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0414 12:19:56.150748 1176576 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 12:19:56.261107 1176576 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744633196.237899648
	
	I0414 12:19:56.261142 1176576 fix.go:216] guest clock: 1744633196.237899648
	I0414 12:19:56.261151 1176576 fix.go:229] Guest: 2025-04-14 12:19:56.237899648 +0000 UTC Remote: 2025-04-14 12:19:56.146010261 +0000 UTC m=+27.149799568 (delta=91.889387ms)
	I0414 12:19:56.261195 1176576 fix.go:200] guest clock delta is within tolerance: 91.889387ms
	I0414 12:19:56.261203 1176576 start.go:83] releasing machines lock for "addons-809953", held for 27.137176335s
	I0414 12:19:56.261239 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:19:56.261678 1176576 main.go:141] libmachine: (addons-809953) Calling .GetIP
	I0414 12:19:56.264847 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.265352 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:56.265381 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.265640 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:19:56.266396 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:19:56.266632 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:19:56.266737 1176576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 12:19:56.266804 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:56.266901 1176576 ssh_runner.go:195] Run: cat /version.json
	I0414 12:19:56.266931 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:19:56.269693 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.270018 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:56.270044 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.270065 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.270195 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:19:56.270388 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:56.270579 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:56.270640 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:56.270642 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:19:56.270769 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:19:56.270851 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:19:56.270903 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:19:56.271147 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:19:56.271358 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:19:56.348742 1176576 ssh_runner.go:195] Run: systemctl --version
	I0414 12:19:56.376178 1176576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 12:19:56.541660 1176576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 12:19:56.548046 1176576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 12:19:56.548142 1176576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 12:19:56.564434 1176576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 12:19:56.564465 1176576 start.go:495] detecting cgroup driver to use...
	I0414 12:19:56.564541 1176576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 12:19:56.583414 1176576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 12:19:56.598862 1176576 docker.go:217] disabling cri-docker service (if available) ...
	I0414 12:19:56.598948 1176576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 12:19:56.614729 1176576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 12:19:56.630464 1176576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 12:19:56.747889 1176576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 12:19:56.915681 1176576 docker.go:233] disabling docker service ...
	I0414 12:19:56.915772 1176576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 12:19:56.932731 1176576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 12:19:56.947301 1176576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 12:19:57.071321 1176576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 12:19:57.192769 1176576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 12:19:57.208427 1176576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 12:19:57.227079 1176576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 12:19:57.227163 1176576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:19:57.238168 1176576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 12:19:57.238262 1176576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:19:57.249961 1176576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:19:57.261783 1176576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:19:57.273733 1176576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 12:19:57.285424 1176576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:19:57.296281 1176576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:19:57.315316 1176576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 12:19:57.327319 1176576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 12:19:57.338318 1176576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 12:19:57.338405 1176576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 12:19:57.352625 1176576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 12:19:57.362931 1176576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:19:57.490499 1176576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 12:19:57.588228 1176576 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 12:19:57.588335 1176576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 12:19:57.593386 1176576 start.go:563] Will wait 60s for crictl version
	I0414 12:19:57.593480 1176576 ssh_runner.go:195] Run: which crictl
	I0414 12:19:57.597753 1176576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 12:19:57.640110 1176576 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 12:19:57.640219 1176576 ssh_runner.go:195] Run: crio --version
	I0414 12:19:57.667316 1176576 ssh_runner.go:195] Run: crio --version
	I0414 12:19:57.699548 1176576 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 12:19:57.701309 1176576 main.go:141] libmachine: (addons-809953) Calling .GetIP
	I0414 12:19:57.704776 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:57.705198 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:19:57.705241 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:19:57.705535 1176576 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 12:19:57.710261 1176576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:19:57.724250 1176576 kubeadm.go:883] updating cluster {Name:addons-809953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-809953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 12:19:57.724389 1176576 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:19:57.724458 1176576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:19:57.762981 1176576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 12:19:57.763066 1176576 ssh_runner.go:195] Run: which lz4
	I0414 12:19:57.767730 1176576 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 12:19:57.772563 1176576 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 12:19:57.772616 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 12:19:59.124061 1176576 crio.go:462] duration metric: took 1.35638317s to copy over tarball
	I0414 12:19:59.124164 1176576 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 12:20:01.662014 1176576 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.537805508s)
	I0414 12:20:01.662051 1176576 crio.go:469] duration metric: took 2.537942497s to extract the tarball
	I0414 12:20:01.662061 1176576 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 12:20:01.702910 1176576 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 12:20:01.752554 1176576 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 12:20:01.752587 1176576 cache_images.go:84] Images are preloaded, skipping loading
	I0414 12:20:01.752596 1176576 kubeadm.go:934] updating node { 192.168.39.2 8443 v1.32.2 crio true true} ...
	I0414 12:20:01.752719 1176576 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-809953 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-809953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 12:20:01.752791 1176576 ssh_runner.go:195] Run: crio config
	I0414 12:20:01.807324 1176576 cni.go:84] Creating CNI manager for ""
	I0414 12:20:01.807377 1176576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:20:01.807396 1176576 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 12:20:01.807441 1176576 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-809953 NodeName:addons-809953 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 12:20:01.807600 1176576 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-809953"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 12:20:01.807718 1176576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 12:20:01.821344 1176576 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 12:20:01.821426 1176576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 12:20:01.834313 1176576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0414 12:20:01.855720 1176576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 12:20:01.874287 1176576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0414 12:20:01.895817 1176576 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I0414 12:20:01.900980 1176576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 12:20:01.916830 1176576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:20:02.053617 1176576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:20:02.074879 1176576 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953 for IP: 192.168.39.2
	I0414 12:20:02.074918 1176576 certs.go:194] generating shared ca certs ...
	I0414 12:20:02.074948 1176576 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:02.075172 1176576 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 12:20:02.296516 1176576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt ...
	I0414 12:20:02.296552 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt: {Name:mk72c83df81b3f60ba363d2af58ee789a40e3852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:02.296753 1176576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key ...
	I0414 12:20:02.296770 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key: {Name:mk270d449741920125af04f77459346c91921fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:02.296864 1176576 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 12:20:02.751556 1176576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt ...
	I0414 12:20:02.751596 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt: {Name:mk146ead054d9ed9f652182549e05b6ce15a1a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:02.751801 1176576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key ...
	I0414 12:20:02.751814 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key: {Name:mk1c108deae2ee41989126681fe778dfe7892e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:02.751883 1176576 certs.go:256] generating profile certs ...
	I0414 12:20:02.751960 1176576 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.key
	I0414 12:20:02.751975 1176576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt with IP's: []
	I0414 12:20:02.847388 1176576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt ...
	I0414 12:20:02.847429 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: {Name:mk74db94041a9576e825f66a1617e774d0160c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:02.847619 1176576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.key ...
	I0414 12:20:02.847631 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.key: {Name:mkc986a9fe7be85624b1d3e4f8c1df935ec5d8ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:02.847724 1176576 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.key.553a4e03
	I0414 12:20:02.847745 1176576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.crt.553a4e03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2]
	I0414 12:20:03.105374 1176576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.crt.553a4e03 ...
	I0414 12:20:03.105416 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.crt.553a4e03: {Name:mka6c717d988d947cd61e601faed58f2e388a336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:03.105612 1176576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.key.553a4e03 ...
	I0414 12:20:03.105634 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.key.553a4e03: {Name:mk527364f8b949905ff78b09eaa816d9e37cc5e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:03.105715 1176576 certs.go:381] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.crt.553a4e03 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.crt
	I0414 12:20:03.105787 1176576 certs.go:385] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.key.553a4e03 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.key
	I0414 12:20:03.105834 1176576 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/proxy-client.key
	I0414 12:20:03.105855 1176576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/proxy-client.crt with IP's: []
	I0414 12:20:03.409572 1176576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/proxy-client.crt ...
	I0414 12:20:03.409626 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/proxy-client.crt: {Name:mkc7d9612b600108bb52c8bc6453a2cf1f441066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:03.409827 1176576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/proxy-client.key ...
	I0414 12:20:03.409845 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/proxy-client.key: {Name:mke4456cd9deb2ab25df516618c4f5a5068214b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:03.410032 1176576 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 12:20:03.410078 1176576 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 12:20:03.410103 1176576 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 12:20:03.410127 1176576 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 12:20:03.410723 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 12:20:03.440625 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 12:20:03.487559 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 12:20:03.518255 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 12:20:03.545388 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 12:20:03.573455 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 12:20:03.600458 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 12:20:03.628528 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 12:20:03.653341 1176576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 12:20:03.680080 1176576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 12:20:03.698720 1176576 ssh_runner.go:195] Run: openssl version
	I0414 12:20:03.705359 1176576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 12:20:03.717317 1176576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:20:03.722085 1176576 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:20:03.722170 1176576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 12:20:03.727945 1176576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 12:20:03.738532 1176576 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 12:20:03.742967 1176576 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 12:20:03.743028 1176576 kubeadm.go:392] StartCluster: {Name:addons-809953 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-809953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:20:03.743099 1176576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 12:20:03.743148 1176576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 12:20:03.780462 1176576 cri.go:89] found id: ""
	I0414 12:20:03.780552 1176576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 12:20:03.791025 1176576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 12:20:03.801252 1176576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 12:20:03.812256 1176576 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 12:20:03.812284 1176576 kubeadm.go:157] found existing configuration files:
	
	I0414 12:20:03.812349 1176576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 12:20:03.823038 1176576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 12:20:03.823115 1176576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 12:20:03.833862 1176576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 12:20:03.844729 1176576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 12:20:03.844803 1176576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 12:20:03.855701 1176576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 12:20:03.865649 1176576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 12:20:03.865738 1176576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 12:20:03.876304 1176576 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 12:20:03.886589 1176576 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 12:20:03.886721 1176576 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 12:20:03.898306 1176576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 12:20:03.950681 1176576 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 12:20:03.950774 1176576 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 12:20:04.055116 1176576 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 12:20:04.055250 1176576 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 12:20:04.055376 1176576 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 12:20:04.066696 1176576 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 12:20:04.069221 1176576 out.go:235]   - Generating certificates and keys ...
	I0414 12:20:04.069368 1176576 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 12:20:04.069444 1176576 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 12:20:04.414124 1176576 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 12:20:04.723051 1176576 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 12:20:05.059875 1176576 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 12:20:05.167006 1176576 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 12:20:05.360797 1176576 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 12:20:05.361010 1176576 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-809953 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0414 12:20:05.600723 1176576 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 12:20:05.600892 1176576 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-809953 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0414 12:20:05.803490 1176576 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 12:20:05.878095 1176576 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 12:20:06.034930 1176576 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 12:20:06.035033 1176576 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 12:20:06.155319 1176576 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 12:20:06.276676 1176576 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 12:20:06.381667 1176576 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 12:20:06.897934 1176576 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 12:20:06.965488 1176576 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 12:20:06.966168 1176576 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 12:20:06.968808 1176576 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 12:20:06.971244 1176576 out.go:235]   - Booting up control plane ...
	I0414 12:20:06.971426 1176576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 12:20:06.971536 1176576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 12:20:06.971626 1176576 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 12:20:06.988574 1176576 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 12:20:06.995321 1176576 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 12:20:06.995400 1176576 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 12:20:07.126646 1176576 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 12:20:07.126796 1176576 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 12:20:07.629733 1176576 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.640886ms
	I0414 12:20:07.629838 1176576 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 12:20:12.629294 1176576 kubeadm.go:310] [api-check] The API server is healthy after 5.002233546s
	I0414 12:20:12.641220 1176576 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 12:20:12.668378 1176576 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 12:20:12.719850 1176576 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 12:20:12.720092 1176576 kubeadm.go:310] [mark-control-plane] Marking the node addons-809953 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 12:20:12.735590 1176576 kubeadm.go:310] [bootstrap-token] Using token: v5h6gl.9e9h81xfx574tx95
	I0414 12:20:12.737308 1176576 out.go:235]   - Configuring RBAC rules ...
	I0414 12:20:12.737494 1176576 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 12:20:12.745482 1176576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 12:20:12.759632 1176576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 12:20:12.768968 1176576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 12:20:12.775769 1176576 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 12:20:12.783444 1176576 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 12:20:13.035166 1176576 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 12:20:13.475426 1176576 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 12:20:14.039878 1176576 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 12:20:14.039915 1176576 kubeadm.go:310] 
	I0414 12:20:14.040004 1176576 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 12:20:14.040011 1176576 kubeadm.go:310] 
	I0414 12:20:14.040149 1176576 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 12:20:14.040159 1176576 kubeadm.go:310] 
	I0414 12:20:14.040201 1176576 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 12:20:14.040287 1176576 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 12:20:14.040398 1176576 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 12:20:14.040447 1176576 kubeadm.go:310] 
	I0414 12:20:14.040545 1176576 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 12:20:14.040555 1176576 kubeadm.go:310] 
	I0414 12:20:14.040622 1176576 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 12:20:14.040632 1176576 kubeadm.go:310] 
	I0414 12:20:14.040699 1176576 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 12:20:14.040808 1176576 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 12:20:14.040902 1176576 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 12:20:14.040912 1176576 kubeadm.go:310] 
	I0414 12:20:14.041032 1176576 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 12:20:14.041197 1176576 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 12:20:14.041210 1176576 kubeadm.go:310] 
	I0414 12:20:14.041346 1176576 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v5h6gl.9e9h81xfx574tx95 \
	I0414 12:20:14.041455 1176576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5f2612fbe124e36b509339bfdb73251340c2c4faa099e85bd5300a27bddf90e9 \
	I0414 12:20:14.041506 1176576 kubeadm.go:310] 	--control-plane 
	I0414 12:20:14.041515 1176576 kubeadm.go:310] 
	I0414 12:20:14.041628 1176576 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 12:20:14.041638 1176576 kubeadm.go:310] 
	I0414 12:20:14.041788 1176576 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v5h6gl.9e9h81xfx574tx95 \
	I0414 12:20:14.041917 1176576 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5f2612fbe124e36b509339bfdb73251340c2c4faa099e85bd5300a27bddf90e9 
	I0414 12:20:14.042166 1176576 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 12:20:14.042349 1176576 cni.go:84] Creating CNI manager for ""
	I0414 12:20:14.042372 1176576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:20:14.044105 1176576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 12:20:14.046086 1176576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 12:20:14.057773 1176576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 12:20:14.077836 1176576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 12:20:14.077963 1176576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:20:14.078028 1176576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-809953 minikube.k8s.io/updated_at=2025_04_14T12_20_14_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=9d10b41d083ee2064b1b8c7e16503e13b1847696 minikube.k8s.io/name=addons-809953 minikube.k8s.io/primary=true
	I0414 12:20:14.093023 1176576 ops.go:34] apiserver oom_adj: -16
	I0414 12:20:14.252923 1176576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:20:14.753628 1176576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:20:15.253461 1176576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:20:15.753178 1176576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:20:16.253374 1176576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:20:16.753762 1176576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:20:17.253199 1176576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:20:17.753714 1176576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 12:20:17.850599 1176576 kubeadm.go:1113] duration metric: took 3.772728358s to wait for elevateKubeSystemPrivileges
	I0414 12:20:17.850690 1176576 kubeadm.go:394] duration metric: took 14.107631643s to StartCluster
	I0414 12:20:17.850738 1176576 settings.go:142] acquiring lock: {Name:mkc68e13b098b3e7461fc88804a0aed191118bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:17.850925 1176576 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 12:20:17.851569 1176576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/kubeconfig: {Name:mk5eb6c4765d4c70f1db00acbce88c0952cb579b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:20:17.851888 1176576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 12:20:17.851915 1176576 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 12:20:17.851995 1176576 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0414 12:20:17.852110 1176576 addons.go:69] Setting yakd=true in profile "addons-809953"
	I0414 12:20:17.852135 1176576 addons.go:238] Setting addon yakd=true in "addons-809953"
	I0414 12:20:17.852151 1176576 addons.go:69] Setting inspektor-gadget=true in profile "addons-809953"
	I0414 12:20:17.852171 1176576 config.go:182] Loaded profile config "addons-809953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:20:17.852194 1176576 addons.go:238] Setting addon inspektor-gadget=true in "addons-809953"
	I0414 12:20:17.852181 1176576 addons.go:69] Setting volumesnapshots=true in profile "addons-809953"
	I0414 12:20:17.852185 1176576 addons.go:69] Setting volcano=true in profile "addons-809953"
	I0414 12:20:17.852213 1176576 addons.go:238] Setting addon volumesnapshots=true in "addons-809953"
	I0414 12:20:17.852219 1176576 addons.go:238] Setting addon volcano=true in "addons-809953"
	I0414 12:20:17.852227 1176576 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-809953"
	I0414 12:20:17.852244 1176576 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-809953"
	I0414 12:20:17.852261 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.852269 1176576 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-809953"
	I0414 12:20:17.852276 1176576 addons.go:69] Setting metrics-server=true in profile "addons-809953"
	I0414 12:20:17.852270 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.852298 1176576 addons.go:238] Setting addon metrics-server=true in "addons-809953"
	I0414 12:20:17.852320 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.852407 1176576 addons.go:69] Setting registry=true in profile "addons-809953"
	I0414 12:20:17.852427 1176576 addons.go:238] Setting addon registry=true in "addons-809953"
	I0414 12:20:17.852450 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.852545 1176576 addons.go:69] Setting storage-provisioner=true in profile "addons-809953"
	I0414 12:20:17.852575 1176576 addons.go:238] Setting addon storage-provisioner=true in "addons-809953"
	I0414 12:20:17.852606 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.852751 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.852781 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.852805 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.852817 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.852837 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.852850 1176576 addons.go:69] Setting gcp-auth=true in profile "addons-809953"
	I0414 12:20:17.852876 1176576 mustload.go:65] Loading cluster: addons-809953
	I0414 12:20:17.852887 1176576 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-809953"
	I0414 12:20:17.852903 1176576 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-809953"
	I0414 12:20:17.852254 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.852963 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.853007 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.853047 1176576 config.go:182] Loaded profile config "addons-809953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:20:17.853056 1176576 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-809953"
	I0414 12:20:17.853090 1176576 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-809953"
	I0414 12:20:17.853169 1176576 addons.go:69] Setting ingress-dns=true in profile "addons-809953"
	I0414 12:20:17.853189 1176576 addons.go:238] Setting addon ingress-dns=true in "addons-809953"
	I0414 12:20:17.853223 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.853280 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.853307 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.852255 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.853379 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.853402 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.853446 1176576 addons.go:69] Setting default-storageclass=true in profile "addons-809953"
	I0414 12:20:17.853458 1176576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-809953"
	I0414 12:20:17.853619 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.853642 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.853701 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.853704 1176576 addons.go:69] Setting ingress=true in profile "addons-809953"
	I0414 12:20:17.853719 1176576 addons.go:238] Setting addon ingress=true in "addons-809953"
	I0414 12:20:17.853726 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.852882 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.853782 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.853806 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.852284 1176576 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-809953"
	I0414 12:20:17.852833 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.853858 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.853910 1176576 addons.go:69] Setting cloud-spanner=true in profile "addons-809953"
	I0414 12:20:17.853940 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.853967 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.854127 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.853943 1176576 addons.go:238] Setting addon cloud-spanner=true in "addons-809953"
	I0414 12:20:17.854311 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.854385 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.854404 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.854423 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.854439 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.854534 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.854580 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.854753 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.854426 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.854819 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.855582 1176576 out.go:177] * Verifying Kubernetes components...
	I0414 12:20:17.857382 1176576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 12:20:17.872640 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I0414 12:20:17.873443 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.874124 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.874148 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.874575 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.875229 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.875300 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.875722 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I0414 12:20:17.875854 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0414 12:20:17.875962 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I0414 12:20:17.876386 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.876484 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.876494 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.876915 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.876938 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.877064 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.877089 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.877572 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.877683 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.877883 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0414 12:20:17.878287 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.878305 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.878350 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.878440 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.878802 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.878830 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.878930 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.879181 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.879449 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.879469 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.883963 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.884228 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.884296 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.884480 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.884519 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.885103 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.885144 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.900528 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.900619 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.912836 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0414 12:20:17.913635 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.914326 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.914350 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.914873 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.915098 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:17.917440 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:17.919602 1176576 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 12:20:17.921287 1176576 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:20:17.921317 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 12:20:17.921350 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:17.925243 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.925431 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I0414 12:20:17.925469 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42263
	I0414 12:20:17.926197 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43441
	I0414 12:20:17.926221 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.926363 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.926363 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:17.926437 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.926761 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.926784 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.926860 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:17.926865 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.926939 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.926957 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.927127 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:17.927342 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:17.927444 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.927511 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.927562 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:17.928164 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.928184 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.928259 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:17.928519 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:17.928572 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.929204 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.929252 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.930487 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:17.931585 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42797
	I0414 12:20:17.932178 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.932340 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:17.932685 1176576 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0414 12:20:17.932814 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.932835 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.933212 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.933773 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.933818 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.934063 1176576 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 12:20:17.934097 1176576 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 12:20:17.934124 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:17.934199 1176576 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0414 12:20:17.934797 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
	I0414 12:20:17.935394 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.935866 1176576 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 12:20:17.935888 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0414 12:20:17.935912 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:17.936031 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.936048 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.936502 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.937129 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.937180 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.939430 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0414 12:20:17.939643 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.940161 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.940276 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:17.940295 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.940338 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:17.940548 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:17.940660 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.940688 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:17.940839 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:17.941151 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.941166 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.941232 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:17.941249 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.941366 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:17.941548 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:17.941797 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:17.941954 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:17.942138 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.942528 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:17.942621 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44647
	I0414 12:20:17.943547 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.944081 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.944104 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.944474 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.944648 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:17.944709 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:17.946728 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32937
	I0414 12:20:17.947747 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0414 12:20:17.947760 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0414 12:20:17.948237 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.948750 1176576 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0414 12:20:17.949067 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.949085 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.949171 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.949562 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.949601 1176576 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-809953"
	I0414 12:20:17.949647 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.949726 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.949739 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.950056 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.950095 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.950260 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.950292 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.950361 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.950604 1176576 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 12:20:17.950624 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0414 12:20:17.950653 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:17.950859 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.951387 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.951409 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.952060 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.952550 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.952591 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.953147 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:17.955134 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:17.955679 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.956156 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:17.956185 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.956561 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:17.956868 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:17.957062 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:17.957089 1176576 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0414 12:20:17.957268 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:17.958648 1176576 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0414 12:20:17.958676 1176576 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0414 12:20:17.958705 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:17.962876 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.963283 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:17.963308 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.963634 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:17.963928 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:17.964172 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:17.964233 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33397
	I0414 12:20:17.964592 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:17.965189 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0414 12:20:17.965764 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.965901 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.966497 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.966516 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.967007 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.967618 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.967721 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.968497 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.968518 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.969006 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.969605 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.969659 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.974675 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I0414 12:20:17.976514 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.977313 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.977341 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.977859 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.978098 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40307
	I0414 12:20:17.978342 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:17.979035 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.979171 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33679
	I0414 12:20:17.979877 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.979899 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.980608 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.981624 1176576 addons.go:238] Setting addon default-storageclass=true in "addons-809953"
	I0414 12:20:17.981670 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:17.982056 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.982100 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.982536 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I0414 12:20:17.982429 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.983342 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.983560 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.983579 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.984122 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.984144 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.984188 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.984752 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:17.984802 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:17.985516 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.985539 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.986097 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.986515 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:17.989282 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:17.991500 1176576 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0414 12:20:17.993167 1176576 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 12:20:17.993198 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0414 12:20:17.993230 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:17.996792 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0414 12:20:17.997398 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:17.997716 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.998125 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:17.998155 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:17.998268 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:17.998290 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:17.998634 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:17.998707 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:17.998897 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:17.999069 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:17.999105 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:17.999256 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:18.001093 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:18.002567 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I0414 12:20:18.003100 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.003204 1176576 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0414 12:20:18.003716 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.003737 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.004196 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.004417 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:18.004630 1176576 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0414 12:20:18.004655 1176576 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0414 12:20:18.004679 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:18.006716 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:18.007325 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:18.007398 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:18.008857 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.008985 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:18.008994 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:18.009002 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:18.009009 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:18.010552 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:18.010565 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43573
	I0414 12:20:18.010574 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:18.010588 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:18.010588 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:18.010600 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:18.010608 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.010632 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42561
	W0414 12:20:18.010703 1176576 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0414 12:20:18.011297 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.011390 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:18.011584 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:18.011794 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:18.011923 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.011939 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.012371 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.012607 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:18.014609 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:18.015025 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:18.015065 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:18.015879 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.016524 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40455
	I0414 12:20:18.016712 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.016735 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.016879 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.017291 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.017424 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.017443 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.017563 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:18.018565 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.018890 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:18.018943 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0414 12:20:18.019814 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.020395 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.020412 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.021037 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.021533 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:18.021595 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:18.021653 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:18.023588 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:18.023887 1176576 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.31
	I0414 12:20:18.024087 1176576 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0414 12:20:18.025134 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46321
	I0414 12:20:18.025416 1176576 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0414 12:20:18.025622 1176576 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0414 12:20:18.025637 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0414 12:20:18.025661 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:18.026735 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.027316 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.027341 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.027613 1176576 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0414 12:20:18.027631 1176576 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0414 12:20:18.027799 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.027937 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:18.029089 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:18.029985 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I0414 12:20:18.030529 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.031131 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:18.031183 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
	I0414 12:20:18.031327 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.031348 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.032024 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.032196 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.032291 1176576 out.go:177]   - Using image docker.io/registry:2.8.3
	I0414 12:20:18.032481 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.032882 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.032513 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:18.032830 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.032923 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I0414 12:20:18.033354 1176576 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0414 12:20:18.033910 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.033497 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.034313 1176576 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0414 12:20:18.034332 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0414 12:20:18.034448 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.034451 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:18.034387 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.034601 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.034836 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:18.034865 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:18.035214 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:18.035254 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.035353 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.035359 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:18.035384 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:18.035400 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.035420 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:18.035560 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:18.036059 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:18.036103 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:18.036137 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:18.036316 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:18.036346 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:18.036455 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:18.037585 1176576 out.go:177]   - Using image docker.io/busybox:stable
	I0414 12:20:18.038327 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:18.039385 1176576 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 12:20:18.039407 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0414 12:20:18.039433 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:18.040565 1176576 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0414 12:20:18.040977 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0414 12:20:18.042049 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.042180 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.042448 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:18.042710 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.042731 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.042962 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.043518 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.043536 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:18.043557 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.043601 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:18.043648 1176576 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 12:20:18.044095 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:18.043728 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:18.043796 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:18.043864 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:18.044185 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.044330 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:18.044388 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:18.044539 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:18.044527 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:18.045052 1176576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0414 12:20:18.045033 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:18.046415 1176576 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 12:20:18.047870 1176576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0414 12:20:18.048149 1176576 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 12:20:18.048178 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0414 12:20:18.048208 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:18.050301 1176576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0414 12:20:18.052794 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.052879 1176576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0414 12:20:18.053527 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:18.053556 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.053973 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:18.054278 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:18.054446 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:18.054577 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:18.056251 1176576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0414 12:20:18.056947 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0414 12:20:18.057487 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:18.058105 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:18.058138 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:18.058535 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:18.058758 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:18.058829 1176576 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0414 12:20:18.060208 1176576 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0414 12:20:18.061012 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:18.061316 1176576 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 12:20:18.061344 1176576 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 12:20:18.061372 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:18.063718 1176576 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0414 12:20:18.065679 1176576 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0414 12:20:18.065711 1176576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0414 12:20:18.065749 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:18.065883 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:18.065919 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.065951 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:18.065969 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.066850 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:18.067218 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:18.067522 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	W0414 12:20:18.068611 1176576 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34846->192.168.39.2:22: read: connection reset by peer
	I0414 12:20:18.068666 1176576 retry.go:31] will retry after 280.314741ms: ssh: handshake failed: read tcp 192.168.39.1:34846->192.168.39.2:22: read: connection reset by peer
	I0414 12:20:18.070579 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.071329 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:18.071373 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:18.071686 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:18.071960 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:18.072181 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:18.072323 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:18.312339 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 12:20:18.381600 1176576 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0414 12:20:18.381631 1176576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0414 12:20:18.429413 1176576 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0414 12:20:18.429451 1176576 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0414 12:20:18.432993 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 12:20:18.458578 1176576 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0414 12:20:18.458608 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0414 12:20:18.502670 1176576 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 12:20:18.502706 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0414 12:20:18.515487 1176576 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0414 12:20:18.515519 1176576 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0414 12:20:18.530415 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 12:20:18.557669 1176576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 12:20:18.557789 1176576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 12:20:18.561607 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 12:20:18.585547 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0414 12:20:18.591480 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 12:20:18.595289 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 12:20:18.630988 1176576 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0414 12:20:18.631027 1176576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0414 12:20:18.687799 1176576 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0414 12:20:18.687834 1176576 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0414 12:20:18.704202 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0414 12:20:18.778853 1176576 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0414 12:20:18.778880 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0414 12:20:18.806699 1176576 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0414 12:20:18.806741 1176576 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0414 12:20:18.819133 1176576 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 12:20:18.819167 1176576 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 12:20:18.827964 1176576 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0414 12:20:18.828000 1176576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0414 12:20:18.948116 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0414 12:20:18.958741 1176576 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0414 12:20:18.958782 1176576 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0414 12:20:18.971319 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 12:20:18.975570 1176576 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0414 12:20:18.975605 1176576 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0414 12:20:19.102701 1176576 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0414 12:20:19.102739 1176576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0414 12:20:19.105280 1176576 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 12:20:19.105310 1176576 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 12:20:19.239503 1176576 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0414 12:20:19.239534 1176576 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0414 12:20:19.253333 1176576 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0414 12:20:19.253371 1176576 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0414 12:20:19.277939 1176576 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0414 12:20:19.277993 1176576 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0414 12:20:19.458644 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 12:20:19.464908 1176576 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0414 12:20:19.464932 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0414 12:20:19.499785 1176576 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0414 12:20:19.499813 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0414 12:20:19.554398 1176576 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0414 12:20:19.554435 1176576 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0414 12:20:19.633076 1176576 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 12:20:19.633104 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0414 12:20:19.883286 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 12:20:19.889253 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0414 12:20:19.896730 1176576 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0414 12:20:19.896759 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0414 12:20:20.171749 1176576 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0414 12:20:20.171777 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0414 12:20:20.515624 1176576 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 12:20:20.515686 1176576 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0414 12:20:20.698590 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 12:20:23.126546 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.814167149s)
	I0414 12:20:23.126628 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.126641 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.126995 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.127021 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:23.127032 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.127039 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.127335 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.127355 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:23.127390 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:23.521267 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.088226218s)
	I0414 12:20:23.521332 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.521346 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.521278 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.990806821s)
	I0414 12:20:23.521362 1176576 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.963644522s)
	I0414 12:20:23.521509 1176576 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.96369361s)
	I0414 12:20:23.521535 1176576 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0414 12:20:23.521440 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.521579 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.521713 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.960073279s)
	I0414 12:20:23.521746 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.521756 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.521760 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:23.521780 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.521794 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:23.521804 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.521811 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.521819 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.936243384s)
	I0414 12:20:23.521822 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.521831 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:23.521840 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.521847 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.521840 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.521893 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.522314 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:23.522337 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:23.522354 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:23.522384 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.522390 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:23.522397 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.522403 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.522441 1176576 node_ready.go:35] waiting up to 6m0s for node "addons-809953" to be "Ready" ...
	I0414 12:20:23.522472 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.522478 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:23.522485 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.522491 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.522545 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:23.522567 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.522576 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:23.522696 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.522707 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:23.522990 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:23.523018 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.523026 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:23.523280 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:23.524457 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.524477 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:23.556814 1176576 node_ready.go:49] node "addons-809953" has status "Ready":"True"
	I0414 12:20:23.556861 1176576 node_ready.go:38] duration metric: took 34.394488ms for node "addons-809953" to be "Ready" ...
	I0414 12:20:23.556879 1176576 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 12:20:23.688232 1176576 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:23.713488 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:23.713526 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:23.713923 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:23.713966 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:23.713979 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:24.071047 1176576 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-809953" context rescaled to 1 replicas
	I0414 12:20:24.863072 1176576 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0414 12:20:24.863131 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:24.867244 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:24.867915 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:24.867955 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:24.868213 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:24.868487 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:24.868668 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:24.868912 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:25.453894 1176576 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0414 12:20:25.493265 1176576 addons.go:238] Setting addon gcp-auth=true in "addons-809953"
	I0414 12:20:25.493347 1176576 host.go:66] Checking if "addons-809953" exists ...
	I0414 12:20:25.493921 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:25.494000 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:25.510705 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
	I0414 12:20:25.511339 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:25.511893 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:25.511920 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:25.512379 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:25.513049 1176576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:20:25.513096 1176576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:20:25.530140 1176576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36867
	I0414 12:20:25.530822 1176576 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:20:25.531559 1176576 main.go:141] libmachine: Using API Version  1
	I0414 12:20:25.531597 1176576 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:20:25.532157 1176576 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:20:25.532423 1176576 main.go:141] libmachine: (addons-809953) Calling .GetState
	I0414 12:20:25.534438 1176576 main.go:141] libmachine: (addons-809953) Calling .DriverName
	I0414 12:20:25.534809 1176576 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0414 12:20:25.534850 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHHostname
	I0414 12:20:25.537935 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:25.538455 1176576 main.go:141] libmachine: (addons-809953) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:9a:ae", ip: ""} in network mk-addons-809953: {Iface:virbr1 ExpiryTime:2025-04-14 13:19:44 +0000 UTC Type:0 Mac:52:54:00:22:9a:ae Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-809953 Clientid:01:52:54:00:22:9a:ae}
	I0414 12:20:25.538495 1176576 main.go:141] libmachine: (addons-809953) DBG | domain addons-809953 has defined IP address 192.168.39.2 and MAC address 52:54:00:22:9a:ae in network mk-addons-809953
	I0414 12:20:25.538712 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHPort
	I0414 12:20:25.539028 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHKeyPath
	I0414 12:20:25.539307 1176576 main.go:141] libmachine: (addons-809953) Calling .GetSSHUsername
	I0414 12:20:25.539586 1176576 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/addons-809953/id_rsa Username:docker}
	I0414 12:20:25.773104 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:26.980208 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.388676325s)
	I0414 12:20:26.980273 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.980285 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.980339 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.276097058s)
	I0414 12:20:26.980276 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.384937361s)
	I0414 12:20:26.980388 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.980391 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.980405 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.980407 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.980516 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.032349958s)
	I0414 12:20:26.980547 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.009189532s)
	I0414 12:20:26.980564 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.980581 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.980592 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.980583 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.980663 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.52198661s)
	I0414 12:20:26.980682 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.980693 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.980786 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.097464342s)
	W0414 12:20:26.980817 1176576 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 12:20:26.980843 1176576 retry.go:31] will retry after 333.91779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 12:20:26.980899 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.091603542s)
	I0414 12:20:26.980928 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.980938 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.981387 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.981415 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.981421 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.981419 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.981438 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.981444 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.981448 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.981455 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.981457 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.981459 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.981464 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.981478 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.981484 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.981490 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.981496 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.981466 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.981830 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.981834 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.981841 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.981841 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.981850 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.981847 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.981858 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.981866 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.981869 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.981873 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.981877 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.981885 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.981893 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.981913 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.982109 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.982118 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.982126 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:26.982132 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:26.982404 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.982447 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.982460 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.982472 1176576 addons.go:479] Verifying addon metrics-server=true in "addons-809953"
	I0414 12:20:26.982622 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.982651 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.982679 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.982686 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.982744 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.982773 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.982783 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.982792 1176576 addons.go:479] Verifying addon ingress=true in "addons-809953"
	I0414 12:20:26.983333 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.983364 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.983378 1176576 addons.go:479] Verifying addon registry=true in "addons-809953"
	I0414 12:20:26.984803 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.984838 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.984856 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:26.984890 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.984896 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.985084 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.985108 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.985321 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:26.985340 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:26.985522 1176576 out.go:177] * Verifying ingress addon...
	I0414 12:20:26.986188 1176576 out.go:177] * Verifying registry addon...
	I0414 12:20:26.987171 1176576 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-809953 service yakd-dashboard -n yakd-dashboard
	
	I0414 12:20:26.988126 1176576 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0414 12:20:26.989046 1176576 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0414 12:20:27.011815 1176576 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0414 12:20:27.011845 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:27.011998 1176576 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0414 12:20:27.012022 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:27.038324 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:27.038350 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:27.038737 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:27.038784 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:27.038799 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:27.315940 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 12:20:27.508062 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:27.508446 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:28.003243 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:28.003904 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:28.226956 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:28.288510 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.589859907s)
	I0414 12:20:28.288567 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:28.288584 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:28.288669 1176576 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.75382332s)
	I0414 12:20:28.289023 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:28.289023 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:28.289103 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:28.289122 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:28.289137 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:28.289514 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:28.289544 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:28.289563 1176576 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-809953"
	I0414 12:20:28.289518 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:28.291578 1176576 out.go:177] * Verifying csi-hostpath-driver addon...
	I0414 12:20:28.291594 1176576 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 12:20:28.293629 1176576 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0414 12:20:28.294331 1176576 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0414 12:20:28.295027 1176576 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0414 12:20:28.295054 1176576 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0414 12:20:28.338566 1176576 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0414 12:20:28.338599 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:28.411144 1176576 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0414 12:20:28.411177 1176576 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0414 12:20:28.474374 1176576 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 12:20:28.474409 1176576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0414 12:20:28.492437 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:28.493271 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:28.518095 1176576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 12:20:28.802455 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:28.996672 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:28.997152 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:29.299537 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:29.494021 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:29.494024 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:29.670515 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.354480064s)
	I0414 12:20:29.670594 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:29.670610 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:29.671024 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:29.671052 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:29.671069 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:29.671079 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:29.671079 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:29.671355 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:29.671371 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:29.852175 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:30.003910 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:30.004701 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:30.163610 1176576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.645461749s)
	I0414 12:20:30.163721 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:30.163744 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:30.164132 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:30.164158 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:30.164170 1176576 main.go:141] libmachine: Making call to close driver server
	I0414 12:20:30.164181 1176576 main.go:141] libmachine: (addons-809953) Calling .Close
	I0414 12:20:30.164647 1176576 main.go:141] libmachine: (addons-809953) DBG | Closing plugin on server side
	I0414 12:20:30.164724 1176576 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:20:30.164799 1176576 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:20:30.166069 1176576 addons.go:479] Verifying addon gcp-auth=true in "addons-809953"
	I0414 12:20:30.168217 1176576 out.go:177] * Verifying gcp-auth addon...
	I0414 12:20:30.171000 1176576 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0414 12:20:30.198260 1176576 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0414 12:20:30.198287 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:30.305821 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:30.492691 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:30.492831 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:30.675054 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:30.696235 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:30.798130 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:30.997000 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:30.997230 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:31.174622 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:31.313879 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:31.495514 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:31.495634 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:31.674513 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:31.798906 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:31.992412 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:31.994482 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:32.174578 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:32.298876 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:32.492504 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:32.492843 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:32.675018 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:32.802008 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:32.992400 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:32.993134 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:33.174746 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:33.194415 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:33.297810 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:33.493463 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:33.493483 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:33.674605 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:34.011150 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:34.011157 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:34.011179 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:34.174667 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:34.298854 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:34.492091 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:34.494201 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:34.673985 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:34.799287 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:34.992306 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:34.992668 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:35.175408 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:35.195229 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:35.299595 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:35.492081 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:35.492373 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:35.675252 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:35.798800 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:35.994838 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:35.994901 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:36.175807 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:36.299183 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:36.493550 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:36.493760 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:36.674689 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:36.798403 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:36.991734 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:36.992576 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:37.175405 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:37.197411 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:37.299144 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:37.492718 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:37.493222 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:37.674244 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:37.798953 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:37.992940 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:37.993091 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:38.174769 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:38.299357 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:38.493256 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:38.493296 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:38.674171 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:38.799340 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:38.992416 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:38.992860 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:39.175298 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:39.298662 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:39.492279 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:39.493390 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:39.674509 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:39.696372 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:39.797624 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:39.993667 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:39.993772 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:40.174826 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:40.298566 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:40.491814 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:40.493826 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:40.675260 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:40.797771 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:40.992547 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:40.993217 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:41.276419 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:41.298673 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:41.491445 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:41.492673 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:41.682172 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:41.798590 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:41.991579 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:41.992064 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:42.174496 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:42.194444 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:42.297645 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:42.492261 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:42.493015 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:42.675210 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:42.799419 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:42.991559 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:42.994103 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:43.410503 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:43.410543 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:43.494096 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:43.494222 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:43.674589 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:43.798517 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:43.994439 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:43.994625 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:44.174241 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:44.194855 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:44.298222 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:44.493064 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:44.493488 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:44.674512 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:44.798494 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:44.993221 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:44.997161 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:45.174271 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:45.298435 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:45.953704 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:45.954127 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:45.955061 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:45.955625 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:46.053850 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:46.054204 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:46.174539 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:46.298222 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:46.492191 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:46.493493 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:46.674182 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:46.696546 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:46.797875 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:46.992659 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:46.992660 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:47.174908 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:47.301057 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:47.492965 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:47.493422 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:47.674909 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:47.799648 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:47.992803 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:47.992842 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:48.174736 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:48.299125 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:48.493225 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:48.493297 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:48.674699 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:48.798259 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:48.992480 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:48.993407 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:49.174982 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:49.195002 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:49.298911 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:49.493063 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:49.494013 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:49.674467 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:49.798612 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:50.449451 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:50.449625 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:50.454128 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:50.454350 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:50.493671 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:50.494825 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:50.675075 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:50.798608 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:50.993157 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:50.993660 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:51.175288 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:51.205843 1176576 pod_ready.go:103] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"False"
	I0414 12:20:51.299325 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:51.491708 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:51.492839 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:51.675941 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:51.695264 1176576 pod_ready.go:93] pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace has status "Ready":"True"
	I0414 12:20:51.695324 1176576 pod_ready.go:82] duration metric: took 28.007028572s for pod "amd-gpu-device-plugin-ct7wb" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.695343 1176576 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2twr9" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.698090 1176576 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-2twr9" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-2twr9" not found
	I0414 12:20:51.698131 1176576 pod_ready.go:82] duration metric: took 2.776732ms for pod "coredns-668d6bf9bc-2twr9" in "kube-system" namespace to be "Ready" ...
	E0414 12:20:51.698149 1176576 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-2twr9" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-2twr9" not found
	I0414 12:20:51.698159 1176576 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-dglhn" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.709822 1176576 pod_ready.go:93] pod "coredns-668d6bf9bc-dglhn" in "kube-system" namespace has status "Ready":"True"
	I0414 12:20:51.709851 1176576 pod_ready.go:82] duration metric: took 11.683795ms for pod "coredns-668d6bf9bc-dglhn" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.709862 1176576 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-809953" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.716429 1176576 pod_ready.go:93] pod "etcd-addons-809953" in "kube-system" namespace has status "Ready":"True"
	I0414 12:20:51.716468 1176576 pod_ready.go:82] duration metric: took 6.597606ms for pod "etcd-addons-809953" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.716485 1176576 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-809953" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.722787 1176576 pod_ready.go:93] pod "kube-apiserver-addons-809953" in "kube-system" namespace has status "Ready":"True"
	I0414 12:20:51.722819 1176576 pod_ready.go:82] duration metric: took 6.326033ms for pod "kube-apiserver-addons-809953" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.722832 1176576 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-809953" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.798736 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:51.892961 1176576 pod_ready.go:93] pod "kube-controller-manager-addons-809953" in "kube-system" namespace has status "Ready":"True"
	I0414 12:20:51.893002 1176576 pod_ready.go:82] duration metric: took 170.160292ms for pod "kube-controller-manager-addons-809953" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.893022 1176576 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lskr4" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:51.997290 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:51.997478 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:52.175228 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:52.292462 1176576 pod_ready.go:93] pod "kube-proxy-lskr4" in "kube-system" namespace has status "Ready":"True"
	I0414 12:20:52.292493 1176576 pod_ready.go:82] duration metric: took 399.46167ms for pod "kube-proxy-lskr4" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:52.292507 1176576 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-809953" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:52.299989 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:52.492923 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:52.493096 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:52.675130 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:52.692880 1176576 pod_ready.go:93] pod "kube-scheduler-addons-809953" in "kube-system" namespace has status "Ready":"True"
	I0414 12:20:52.692912 1176576 pod_ready.go:82] duration metric: took 400.39629ms for pod "kube-scheduler-addons-809953" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:52.692924 1176576 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-r2lh2" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:52.799141 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:52.992702 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:52.993156 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:53.092983 1176576 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-r2lh2" in "kube-system" namespace has status "Ready":"True"
	I0414 12:20:53.093019 1176576 pod_ready.go:82] duration metric: took 400.088204ms for pod "nvidia-device-plugin-daemonset-r2lh2" in "kube-system" namespace to be "Ready" ...
	I0414 12:20:53.093037 1176576 pod_ready.go:39] duration metric: took 29.536105297s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 12:20:53.093057 1176576 api_server.go:52] waiting for apiserver process to appear ...
	I0414 12:20:53.093131 1176576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:20:53.117304 1176576 api_server.go:72] duration metric: took 35.265327248s to wait for apiserver process to appear ...
	I0414 12:20:53.117339 1176576 api_server.go:88] waiting for apiserver healthz status ...
	I0414 12:20:53.117367 1176576 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0414 12:20:53.122616 1176576 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0414 12:20:53.123938 1176576 api_server.go:141] control plane version: v1.32.2
	I0414 12:20:53.123967 1176576 api_server.go:131] duration metric: took 6.621329ms to wait for apiserver health ...
	I0414 12:20:53.123975 1176576 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 12:20:53.175248 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:53.294841 1176576 system_pods.go:59] 18 kube-system pods found
	I0414 12:20:53.294899 1176576 system_pods.go:61] "amd-gpu-device-plugin-ct7wb" [12231d01-136a-4ff4-bc3c-cbe556d70f04] Running
	I0414 12:20:53.294909 1176576 system_pods.go:61] "coredns-668d6bf9bc-dglhn" [a6c12ef1-9bc6-4c9c-bad2-91c3183cf369] Running
	I0414 12:20:53.294921 1176576 system_pods.go:61] "csi-hostpath-attacher-0" [7f35da1d-877e-416b-89d1-6946a28bb786] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0414 12:20:53.294933 1176576 system_pods.go:61] "csi-hostpath-resizer-0" [4a1aa3b6-a3a3-4764-98b9-b13679a80f74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0414 12:20:53.294948 1176576 system_pods.go:61] "csi-hostpathplugin-mqj29" [ec27f403-a116-4dc2-bf2c-39e62a1036d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 12:20:53.294957 1176576 system_pods.go:61] "etcd-addons-809953" [5327d6ff-507e-4c9e-9678-4507f6e58841] Running
	I0414 12:20:53.294964 1176576 system_pods.go:61] "kube-apiserver-addons-809953" [bb484a4e-3b35-4b8c-8bac-9615d3047f2c] Running
	I0414 12:20:53.294970 1176576 system_pods.go:61] "kube-controller-manager-addons-809953" [63e859ec-c71e-4809-931f-0786a409d428] Running
	I0414 12:20:53.294976 1176576 system_pods.go:61] "kube-ingress-dns-minikube" [5025d586-656e-41ce-8376-fb575a821770] Running
	I0414 12:20:53.294981 1176576 system_pods.go:61] "kube-proxy-lskr4" [7775ea2d-48b8-4556-aeed-31c03c49ca89] Running
	I0414 12:20:53.294988 1176576 system_pods.go:61] "kube-scheduler-addons-809953" [db605bb0-8f6c-41fb-9935-17c9f280093d] Running
	I0414 12:20:53.294999 1176576 system_pods.go:61] "metrics-server-7fbb699795-fcnxn" [c969a671-fea5-45a8-9791-7229dea7d2c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 12:20:53.295006 1176576 system_pods.go:61] "nvidia-device-plugin-daemonset-r2lh2" [180e8b31-3785-499d-a052-9c37fdd10c40] Running
	I0414 12:20:53.295015 1176576 system_pods.go:61] "registry-6c88467877-zxv7w" [0249f203-5f55-4230-83cb-eaf56a33b5e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0414 12:20:53.295057 1176576 system_pods.go:61] "registry-proxy-bxxsm" [46066842-76d1-49e9-8cc5-e7b3e9617fd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0414 12:20:53.295075 1176576 system_pods.go:61] "snapshot-controller-68b874b76f-gstbw" [bf21029f-92d5-49df-8c6c-36958aeb8133] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 12:20:53.295084 1176576 system_pods.go:61] "snapshot-controller-68b874b76f-hgklf" [d20348f4-3b94-4789-9520-97fb293e0496] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 12:20:53.295088 1176576 system_pods.go:61] "storage-provisioner" [2f1f6c10-14a2-455c-8caf-39b70b1165f4] Running
	I0414 12:20:53.295096 1176576 system_pods.go:74] duration metric: took 171.115065ms to wait for pod list to return data ...
	I0414 12:20:53.295109 1176576 default_sa.go:34] waiting for default service account to be created ...
	I0414 12:20:53.298772 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:53.492552 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:53.493034 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:53.493321 1176576 default_sa.go:45] found service account: "default"
	I0414 12:20:53.493344 1176576 default_sa.go:55] duration metric: took 198.228592ms for default service account to be created ...
	I0414 12:20:53.493353 1176576 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 12:20:53.674474 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:53.693710 1176576 system_pods.go:86] 18 kube-system pods found
	I0414 12:20:53.693756 1176576 system_pods.go:89] "amd-gpu-device-plugin-ct7wb" [12231d01-136a-4ff4-bc3c-cbe556d70f04] Running
	I0414 12:20:53.693765 1176576 system_pods.go:89] "coredns-668d6bf9bc-dglhn" [a6c12ef1-9bc6-4c9c-bad2-91c3183cf369] Running
	I0414 12:20:53.693775 1176576 system_pods.go:89] "csi-hostpath-attacher-0" [7f35da1d-877e-416b-89d1-6946a28bb786] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0414 12:20:53.693781 1176576 system_pods.go:89] "csi-hostpath-resizer-0" [4a1aa3b6-a3a3-4764-98b9-b13679a80f74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0414 12:20:53.693793 1176576 system_pods.go:89] "csi-hostpathplugin-mqj29" [ec27f403-a116-4dc2-bf2c-39e62a1036d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 12:20:53.693800 1176576 system_pods.go:89] "etcd-addons-809953" [5327d6ff-507e-4c9e-9678-4507f6e58841] Running
	I0414 12:20:53.693806 1176576 system_pods.go:89] "kube-apiserver-addons-809953" [bb484a4e-3b35-4b8c-8bac-9615d3047f2c] Running
	I0414 12:20:53.693812 1176576 system_pods.go:89] "kube-controller-manager-addons-809953" [63e859ec-c71e-4809-931f-0786a409d428] Running
	I0414 12:20:53.693856 1176576 system_pods.go:89] "kube-ingress-dns-minikube" [5025d586-656e-41ce-8376-fb575a821770] Running
	I0414 12:20:53.693872 1176576 system_pods.go:89] "kube-proxy-lskr4" [7775ea2d-48b8-4556-aeed-31c03c49ca89] Running
	I0414 12:20:53.693878 1176576 system_pods.go:89] "kube-scheduler-addons-809953" [db605bb0-8f6c-41fb-9935-17c9f280093d] Running
	I0414 12:20:53.693888 1176576 system_pods.go:89] "metrics-server-7fbb699795-fcnxn" [c969a671-fea5-45a8-9791-7229dea7d2c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0414 12:20:53.693898 1176576 system_pods.go:89] "nvidia-device-plugin-daemonset-r2lh2" [180e8b31-3785-499d-a052-9c37fdd10c40] Running
	I0414 12:20:53.693910 1176576 system_pods.go:89] "registry-6c88467877-zxv7w" [0249f203-5f55-4230-83cb-eaf56a33b5e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0414 12:20:53.693921 1176576 system_pods.go:89] "registry-proxy-bxxsm" [46066842-76d1-49e9-8cc5-e7b3e9617fd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0414 12:20:53.693930 1176576 system_pods.go:89] "snapshot-controller-68b874b76f-gstbw" [bf21029f-92d5-49df-8c6c-36958aeb8133] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 12:20:53.693942 1176576 system_pods.go:89] "snapshot-controller-68b874b76f-hgklf" [d20348f4-3b94-4789-9520-97fb293e0496] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 12:20:53.693948 1176576 system_pods.go:89] "storage-provisioner" [2f1f6c10-14a2-455c-8caf-39b70b1165f4] Running
	I0414 12:20:53.693959 1176576 system_pods.go:126] duration metric: took 200.599738ms to wait for k8s-apps to be running ...
	I0414 12:20:53.693974 1176576 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 12:20:53.694035 1176576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:20:53.730005 1176576 system_svc.go:56] duration metric: took 36.014986ms WaitForService to wait for kubelet
	I0414 12:20:53.730053 1176576 kubeadm.go:582] duration metric: took 35.878104383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 12:20:53.730090 1176576 node_conditions.go:102] verifying NodePressure condition ...
	I0414 12:20:53.798170 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:53.893440 1176576 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 12:20:53.893475 1176576 node_conditions.go:123] node cpu capacity is 2
	I0414 12:20:53.893492 1176576 node_conditions.go:105] duration metric: took 163.39506ms to run NodePressure ...
	I0414 12:20:53.893509 1176576 start.go:241] waiting for startup goroutines ...
	I0414 12:20:53.993172 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:53.993313 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:54.174241 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:54.299566 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:54.493473 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:54.494020 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:54.674974 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:54.798247 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:54.993207 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:54.993733 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:55.174937 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:55.298036 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:55.492828 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:55.492978 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:55.675017 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:55.798850 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:55.991986 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:55.992851 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:56.175360 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:56.298832 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:56.493076 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:56.493744 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:56.674723 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:56.798625 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:56.991972 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:56.993667 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:57.174862 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:57.298474 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:57.491951 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:57.492676 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:57.675182 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:57.803243 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:57.993258 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:57.993650 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:58.175158 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:58.299763 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:58.496485 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:58.496485 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:58.674476 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:58.798673 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:58.992952 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:58.993115 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:59.174088 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:59.298404 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:59.494643 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:20:59.496400 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:59.674851 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:20:59.798395 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:20:59.992528 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:20:59.993278 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:00.176346 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:00.299336 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:00.491527 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:00.495440 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:00.675922 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:00.799088 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:00.993951 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:00.994262 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:01.174639 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:01.297588 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:01.493405 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:01.495928 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:01.675044 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:01.798687 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:01.992988 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:01.993495 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:02.175636 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:02.298940 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:02.495562 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:02.495868 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:02.676483 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:02.798555 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:02.993074 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:02.993163 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:03.175231 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:03.299647 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:03.492821 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:03.493010 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:03.674423 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:03.798069 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:03.993031 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:03.993192 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:04.174608 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:04.298446 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:04.491897 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:04.493335 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:04.675070 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:04.798959 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:04.993390 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:04.993425 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:05.175170 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:05.298661 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:05.492293 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:05.492893 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:05.675514 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:05.798346 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:05.991487 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:05.992638 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:06.175052 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:06.299187 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:06.493059 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:06.493140 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:06.675888 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:06.798728 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:06.993104 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:06.993306 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:07.174323 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:07.299920 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:07.494769 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:07.494776 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:07.674743 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:07.798442 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:07.997614 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:07.998198 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:08.175060 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:08.299050 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:08.493933 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:08.494259 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:08.754365 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:08.798757 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:08.992346 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:08.992603 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:09.174748 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:09.298221 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:09.493226 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:09.493819 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:09.674574 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:09.798066 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:09.992752 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:09.993019 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:10.174500 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:10.299875 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:10.492374 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:10.493151 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:10.674064 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:10.798897 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:10.993352 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:10.993363 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:11.174639 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:11.298447 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:11.491764 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:11.496330 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:11.675175 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:11.798378 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:11.993541 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:11.994001 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:12.179706 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:12.299287 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:12.493359 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:12.493573 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:12.674777 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:12.799395 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:12.993450 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:12.993869 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:13.174915 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:13.299495 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:13.491889 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:13.494314 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:13.674836 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:13.797927 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:13.992914 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:13.993769 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:14.175206 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:14.298762 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:14.492857 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:14.494330 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 12:21:14.676088 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:14.798635 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:14.994138 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:14.994292 1176576 kapi.go:107] duration metric: took 48.005251045s to wait for kubernetes.io/minikube-addons=registry ...
	I0414 12:21:15.175026 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:15.298994 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:15.495378 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:15.674769 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:15.798262 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:15.992050 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:16.174854 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:16.298196 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:16.493177 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:16.674324 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:16.803864 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:16.991930 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:17.174785 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:17.298328 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:17.492211 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:17.674530 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:17.798020 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:17.992234 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:18.174514 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:18.297495 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:18.492024 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:18.674955 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:18.798624 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:19.137831 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:19.175350 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:19.298696 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:19.493185 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:19.674732 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:19.798625 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:19.991942 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:20.178369 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:20.300263 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:20.491978 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:20.674736 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:20.798086 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:20.992611 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:21.175328 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:21.299207 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:21.492154 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:21.674055 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:21.806625 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:21.994761 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:22.177749 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:22.300427 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:22.496862 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:22.674437 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:22.799364 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:22.991614 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:23.175687 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:23.297997 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:23.492191 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:23.674343 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:23.798871 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:23.993815 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:24.175745 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:24.298239 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:24.492282 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:24.674797 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:24.798451 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:24.991735 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:25.178079 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:25.298728 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:25.493113 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:25.676122 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:25.798620 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:25.991559 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:26.175454 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:26.297532 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:26.493724 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:26.674161 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:26.799834 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:27.300278 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:27.300371 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:27.384420 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:27.493484 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:27.674818 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:27.800506 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:27.991339 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:28.180376 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:28.300035 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:28.492021 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:28.675067 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:28.800968 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:28.996252 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:29.175457 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:29.298874 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:29.493714 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:29.675331 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:29.798628 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:29.993341 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:30.178225 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:30.315403 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:30.492496 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:30.674113 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:30.798609 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:30.992164 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:31.174938 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:31.298221 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:31.494505 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:31.680185 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:31.799819 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:31.992560 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:32.175802 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:32.300697 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:32.493412 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:32.681642 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:32.799204 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:32.993384 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:33.175928 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:33.300253 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:33.494543 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:33.676318 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:33.798095 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:33.992517 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:34.175113 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:34.301708 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:34.493933 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:34.675167 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:34.799069 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:34.992706 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:35.387232 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:35.387227 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:35.492991 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:35.676570 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:35.798823 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:35.992679 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:36.174903 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:36.299879 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:36.492158 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:36.673899 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:36.798673 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:36.993184 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:37.175867 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:37.298025 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:37.492655 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:37.674828 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:37.798205 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:37.991609 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:38.685582 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:38.689181 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:38.691492 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:38.691728 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:38.798697 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:38.992139 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:39.175221 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:39.298671 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:39.495007 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:39.675164 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:39.798263 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:39.992569 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:40.176250 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:40.298719 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:40.496128 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:40.677645 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:40.798931 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:40.992740 1176576 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 12:21:41.176740 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:41.298235 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:41.894170 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:41.894282 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:41.894504 1176576 kapi.go:107] duration metric: took 1m14.906377532s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0414 12:21:42.193551 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:42.298683 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:42.674915 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:42.799247 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:43.176243 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:43.298801 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:43.675585 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:43.798097 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:44.175523 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:44.298766 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:44.674824 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:44.798253 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:45.174997 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:45.300195 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:45.676976 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 12:21:45.802680 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:46.175081 1176576 kapi.go:107] duration metric: took 1m16.004086723s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0414 12:21:46.177192 1176576 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-809953 cluster.
	I0414 12:21:46.178663 1176576 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0414 12:21:46.180123 1176576 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0414 12:21:46.298787 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:46.799480 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:47.299156 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:47.798683 1176576 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 12:21:48.298372 1176576 kapi.go:107] duration metric: took 1m20.004038819s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0414 12:21:48.300454 1176576 out.go:177] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, ingress-dns, cloud-spanner, storage-provisioner-rancher, metrics-server, inspektor-gadget, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0414 12:21:48.302069 1176576 addons.go:514] duration metric: took 1m30.450059366s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin ingress-dns cloud-spanner storage-provisioner-rancher metrics-server inspektor-gadget nvidia-device-plugin yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0414 12:21:48.302145 1176576 start.go:246] waiting for cluster config update ...
	I0414 12:21:48.302172 1176576 start.go:255] writing updated cluster config ...
	I0414 12:21:48.302612 1176576 ssh_runner.go:195] Run: rm -f paused
	I0414 12:21:48.359634 1176576 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 12:21:48.362078 1176576 out.go:177] * Done! kubectl is now configured to use "addons-809953" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.907183041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf6b7619-5cb3-467d-a2d0-e0291e2c7e91 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.907252358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf6b7619-5cb3-467d-a2d0-e0291e2c7e91 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.907542723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c96f8578297bb963d972223a8c8f200fa0b0c818538acf3af962c24839a990d,PodSandboxId:c694dca47dc1a4e491494c23ec5786e0815d270815aa3b007d8e79fd345838f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744633341776625910,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e74bcb1d-5b31-4ba9-b1d1-c80d3be8f404,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:285ebc851008c1e271bd89d726e4c7c34641f42fe96ca74ac20fdc3e69d065b0,PodSandboxId:3157b13e58ebad372548ef168eef02ea3211c221f54f687c1e151224d54174ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744633313295695674,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ec9aa0c-1b19-4612-8c4a-edfb0a202b47,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead90e0735f84a7e5b010fbd03b4ebd9ec89c4215461908804a4241436644891,PodSandboxId:1bec071bff4f4df6784ad5a5fa422b3bf3c474d42735c7cd5afbdb26e1c17d5a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744633300129008628,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-7jw9p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ade92bc8-f38a-43f3-8458-2e2cadc89d93,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ddf65174080c196bb818acc8c2961f3a111d00578eda1eafa04a5b8e6143d5d7,PodSandboxId:7e21f0a159f920307b9d39530cf24980d1f2510ecc192ab9960d43b2a7d5c334,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744633283993183073,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2ccp8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea5e1832-9131-49dc-a437-6ffa7851ab88,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e589cfc2db0a96699694b433868f8a032278d7bde6ddfdc2fa0682db51ae184b,PodSandboxId:f2f394e588f2296f75562b96c0e8d05d1591644c040778d3010f8fdcda60cae1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744633283857544265,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m86bz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd80802b-098e-46f8-ba6e-96510115e41a,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c5725cae0acab0587aad12bb72966743e2a9bc5bd30640e80928db9cffa938,PodSandboxId:be866046d1062db416096599d358c4a2caf9a3f05bcd078691627239b4c22225,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744633250991588891,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ct7wb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12231d01-136a-4ff4-bc3c-cbe556d70f04,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874a78fd347b24050cbaa88eac4750024f5f426cd7ea716c11d0e909a895456d,PodSandboxId:f9066ba69b4745aaaa4dc2cc139af04423f5aa9befe45ce9ec0c993ce3881628,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744633235517156629,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5025d586-656e-41ce-8376-fb575a821770,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5df1f5bc8d444ac85372ccfff5ffced6265452036fdee8d5a842122747684dd,PodSandboxId:42c40fe05b843ea2e97831cd167bf706ae618d9949d8ba259c295be399dae4a3,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633225164013917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1f6c10-14a2-455c-8caf-39b70b1165f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3025206026698ec9de15d96d07c770250b9d5552d451a82dbcb731e62e76919,PodSandboxId:34acc80aff1b3f53ae8fbf3ca3d925ca239a6e71ae5fe9141e0155065032438a,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633223115231794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dglhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c12ef1-9bc6-4c9c-bad2-91c3183cf369,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:95977c588c52a47bbbf17190a3ac29889b5d4e7cd10a2869db6712be2794be00,PodSandboxId:cb63d32f8d222af4da108ecc394e8e5f269fe651c12807217dc7bc8a5c2d4d55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633219980194899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lskr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7775ea2d-48b8-4556-aeed-31c03c49ca89,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c6f93ddb3c6d4613a40369bd2ac
c9376c0a7aba6b94032282ee392a0775569,PodSandboxId:afa911ac50004340e3b1b2936526dfff05335191184ac36856ab68fcd210f824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633208291618922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccaa5688b85c6c1be9e6ccbebe9eb42,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2b36f9a64ebc976a9c73a541dfe7c295eadf66cc4a9
82847684c7c95010a6,PodSandboxId:320940f3c8a0ab045a6f119cd63b53dbc7fbeff4fdce680ad1109d4bebd77e99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633208277964394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ed7e11f1ff2e42b8446edb9eb5346c,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b95a98b063dea3890e69304fd90a006faf2
f9e7d1cf6e01262266221dc7ad28,PodSandboxId:6dd475c293a9b0460966206574b473103b23fb732ed78cc957173811f99484fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633208246978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2006b4988088ec33941eb602cd42e3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3648d3cb58bd21b7cb4c8908991a394c22e4ee016dfc0fa8f5742
a98c8da9f54,PodSandboxId:e372082e3633c9ce882cc8d843313dcdfac09d82e5dfb404cb001863881cbdc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633208187320228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd811825fab9291283ca44253b3884a4,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf6b7619-5cb3-467d-a2d0-e0291e2c7e91 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.939514738Z" level=debug msg="Too many requests to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: sleeping for 2.000000 seconds before next attempt" file="docker/docker_client.go:596"
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.948050576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6f8573d-7fd2-4d1e-bd12-be4cafe15b29 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.948134286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6f8573d-7fd2-4d1e-bd12-be4cafe15b29 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.949610448Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0c693ed-9655-41d0-9065-d15c2c2fe417 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.950936059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633479950902193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0c693ed-9655-41d0-9065-d15c2c2fe417 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.951662251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23be0a7a-5b75-4c82-8e5d-d2041f287640 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.951728417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23be0a7a-5b75-4c82-8e5d-d2041f287640 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.952099731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c96f8578297bb963d972223a8c8f200fa0b0c818538acf3af962c24839a990d,PodSandboxId:c694dca47dc1a4e491494c23ec5786e0815d270815aa3b007d8e79fd345838f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744633341776625910,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e74bcb1d-5b31-4ba9-b1d1-c80d3be8f404,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:285ebc851008c1e271bd89d726e4c7c34641f42fe96ca74ac20fdc3e69d065b0,PodSandboxId:3157b13e58ebad372548ef168eef02ea3211c221f54f687c1e151224d54174ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744633313295695674,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ec9aa0c-1b19-4612-8c4a-edfb0a202b47,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead90e0735f84a7e5b010fbd03b4ebd9ec89c4215461908804a4241436644891,PodSandboxId:1bec071bff4f4df6784ad5a5fa422b3bf3c474d42735c7cd5afbdb26e1c17d5a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744633300129008628,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-7jw9p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ade92bc8-f38a-43f3-8458-2e2cadc89d93,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ddf65174080c196bb818acc8c2961f3a111d00578eda1eafa04a5b8e6143d5d7,PodSandboxId:7e21f0a159f920307b9d39530cf24980d1f2510ecc192ab9960d43b2a7d5c334,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744633283993183073,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2ccp8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea5e1832-9131-49dc-a437-6ffa7851ab88,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e589cfc2db0a96699694b433868f8a032278d7bde6ddfdc2fa0682db51ae184b,PodSandboxId:f2f394e588f2296f75562b96c0e8d05d1591644c040778d3010f8fdcda60cae1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744633283857544265,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m86bz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd80802b-098e-46f8-ba6e-96510115e41a,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c5725cae0acab0587aad12bb72966743e2a9bc5bd30640e80928db9cffa938,PodSandboxId:be866046d1062db416096599d358c4a2caf9a3f05bcd078691627239b4c22225,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744633250991588891,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ct7wb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12231d01-136a-4ff4-bc3c-cbe556d70f04,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874a78fd347b24050cbaa88eac4750024f5f426cd7ea716c11d0e909a895456d,PodSandboxId:f9066ba69b4745aaaa4dc2cc139af04423f5aa9befe45ce9ec0c993ce3881628,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744633235517156629,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5025d586-656e-41ce-8376-fb575a821770,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5df1f5bc8d444ac85372ccfff5ffced6265452036fdee8d5a842122747684dd,PodSandboxId:42c40fe05b843ea2e97831cd167bf706ae618d9949d8ba259c295be399dae4a3,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633225164013917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1f6c10-14a2-455c-8caf-39b70b1165f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3025206026698ec9de15d96d07c770250b9d5552d451a82dbcb731e62e76919,PodSandboxId:34acc80aff1b3f53ae8fbf3ca3d925ca239a6e71ae5fe9141e0155065032438a,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633223115231794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dglhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c12ef1-9bc6-4c9c-bad2-91c3183cf369,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:95977c588c52a47bbbf17190a3ac29889b5d4e7cd10a2869db6712be2794be00,PodSandboxId:cb63d32f8d222af4da108ecc394e8e5f269fe651c12807217dc7bc8a5c2d4d55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633219980194899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lskr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7775ea2d-48b8-4556-aeed-31c03c49ca89,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c6f93ddb3c6d4613a40369bd2ac
c9376c0a7aba6b94032282ee392a0775569,PodSandboxId:afa911ac50004340e3b1b2936526dfff05335191184ac36856ab68fcd210f824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633208291618922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccaa5688b85c6c1be9e6ccbebe9eb42,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2b36f9a64ebc976a9c73a541dfe7c295eadf66cc4a9
82847684c7c95010a6,PodSandboxId:320940f3c8a0ab045a6f119cd63b53dbc7fbeff4fdce680ad1109d4bebd77e99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633208277964394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ed7e11f1ff2e42b8446edb9eb5346c,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b95a98b063dea3890e69304fd90a006faf2
f9e7d1cf6e01262266221dc7ad28,PodSandboxId:6dd475c293a9b0460966206574b473103b23fb732ed78cc957173811f99484fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633208246978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2006b4988088ec33941eb602cd42e3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3648d3cb58bd21b7cb4c8908991a394c22e4ee016dfc0fa8f5742
a98c8da9f54,PodSandboxId:e372082e3633c9ce882cc8d843313dcdfac09d82e5dfb404cb001863881cbdc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633208187320228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd811825fab9291283ca44253b3884a4,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23be0a7a-5b75-4c82-8e5d-d2041f287640 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.991928102Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f4acfd4-9419-40d3-861e-f2f4b458c588 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.992009507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f4acfd4-9419-40d3-861e-f2f4b458c588 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.993258557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abca5c3b-234f-4d31-96ef-7dcf3b3e285d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.994402464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633479994374328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abca5c3b-234f-4d31-96ef-7dcf3b3e285d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.994990086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1563b622-cfe8-445f-8ca1-63f71c76b5bc name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.995050794Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1563b622-cfe8-445f-8ca1-63f71c76b5bc name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:39 addons-809953 crio[668]: time="2025-04-14 12:24:39.995349279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c96f8578297bb963d972223a8c8f200fa0b0c818538acf3af962c24839a990d,PodSandboxId:c694dca47dc1a4e491494c23ec5786e0815d270815aa3b007d8e79fd345838f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744633341776625910,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e74bcb1d-5b31-4ba9-b1d1-c80d3be8f404,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:285ebc851008c1e271bd89d726e4c7c34641f42fe96ca74ac20fdc3e69d065b0,PodSandboxId:3157b13e58ebad372548ef168eef02ea3211c221f54f687c1e151224d54174ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744633313295695674,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ec9aa0c-1b19-4612-8c4a-edfb0a202b47,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead90e0735f84a7e5b010fbd03b4ebd9ec89c4215461908804a4241436644891,PodSandboxId:1bec071bff4f4df6784ad5a5fa422b3bf3c474d42735c7cd5afbdb26e1c17d5a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744633300129008628,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-7jw9p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ade92bc8-f38a-43f3-8458-2e2cadc89d93,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ddf65174080c196bb818acc8c2961f3a111d00578eda1eafa04a5b8e6143d5d7,PodSandboxId:7e21f0a159f920307b9d39530cf24980d1f2510ecc192ab9960d43b2a7d5c334,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744633283993183073,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2ccp8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea5e1832-9131-49dc-a437-6ffa7851ab88,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e589cfc2db0a96699694b433868f8a032278d7bde6ddfdc2fa0682db51ae184b,PodSandboxId:f2f394e588f2296f75562b96c0e8d05d1591644c040778d3010f8fdcda60cae1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744633283857544265,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m86bz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd80802b-098e-46f8-ba6e-96510115e41a,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c5725cae0acab0587aad12bb72966743e2a9bc5bd30640e80928db9cffa938,PodSandboxId:be866046d1062db416096599d358c4a2caf9a3f05bcd078691627239b4c22225,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744633250991588891,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ct7wb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12231d01-136a-4ff4-bc3c-cbe556d70f04,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874a78fd347b24050cbaa88eac4750024f5f426cd7ea716c11d0e909a895456d,PodSandboxId:f9066ba69b4745aaaa4dc2cc139af04423f5aa9befe45ce9ec0c993ce3881628,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744633235517156629,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5025d586-656e-41ce-8376-fb575a821770,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5df1f5bc8d444ac85372ccfff5ffced6265452036fdee8d5a842122747684dd,PodSandboxId:42c40fe05b843ea2e97831cd167bf706ae618d9949d8ba259c295be399dae4a3,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633225164013917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1f6c10-14a2-455c-8caf-39b70b1165f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3025206026698ec9de15d96d07c770250b9d5552d451a82dbcb731e62e76919,PodSandboxId:34acc80aff1b3f53ae8fbf3ca3d925ca239a6e71ae5fe9141e0155065032438a,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633223115231794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dglhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c12ef1-9bc6-4c9c-bad2-91c3183cf369,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:95977c588c52a47bbbf17190a3ac29889b5d4e7cd10a2869db6712be2794be00,PodSandboxId:cb63d32f8d222af4da108ecc394e8e5f269fe651c12807217dc7bc8a5c2d4d55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633219980194899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lskr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7775ea2d-48b8-4556-aeed-31c03c49ca89,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c6f93ddb3c6d4613a40369bd2ac
c9376c0a7aba6b94032282ee392a0775569,PodSandboxId:afa911ac50004340e3b1b2936526dfff05335191184ac36856ab68fcd210f824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633208291618922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccaa5688b85c6c1be9e6ccbebe9eb42,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2b36f9a64ebc976a9c73a541dfe7c295eadf66cc4a9
82847684c7c95010a6,PodSandboxId:320940f3c8a0ab045a6f119cd63b53dbc7fbeff4fdce680ad1109d4bebd77e99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633208277964394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ed7e11f1ff2e42b8446edb9eb5346c,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b95a98b063dea3890e69304fd90a006faf2
f9e7d1cf6e01262266221dc7ad28,PodSandboxId:6dd475c293a9b0460966206574b473103b23fb732ed78cc957173811f99484fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633208246978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2006b4988088ec33941eb602cd42e3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3648d3cb58bd21b7cb4c8908991a394c22e4ee016dfc0fa8f5742
a98c8da9f54,PodSandboxId:e372082e3633c9ce882cc8d843313dcdfac09d82e5dfb404cb001863881cbdc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633208187320228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd811825fab9291283ca44253b3884a4,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1563b622-cfe8-445f-8ca1-63f71c76b5bc name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 14 12:24:40 addons-809953 crio[668]: time="2025-04-14 12:24:40.031604069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86fa5b01-2df4-446a-8d06-79acadafb171 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:40 addons-809953 crio[668]: time="2025-04-14 12:24:40.031680799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86fa5b01-2df4-446a-8d06-79acadafb171 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:24:40 addons-809953 crio[668]: time="2025-04-14 12:24:40.033085184Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=619e7edb-a614-4696-b383-e3bb82e338e1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:40 addons-809953 crio[668]: time="2025-04-14 12:24:40.034273172Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633480034242835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=619e7edb-a614-4696-b383-e3bb82e338e1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:24:40 addons-809953 crio[668]: time="2025-04-14 12:24:40.034937438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e639bba8-7340-4ef5-8fa5-bc7360e1c55f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:40 addons-809953 crio[668]: time="2025-04-14 12:24:40.035010971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e639bba8-7340-4ef5-8fa5-bc7360e1c55f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:24:40 addons-809953 crio[668]: time="2025-04-14 12:24:40.035321562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c96f8578297bb963d972223a8c8f200fa0b0c818538acf3af962c24839a990d,PodSandboxId:c694dca47dc1a4e491494c23ec5786e0815d270815aa3b007d8e79fd345838f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744633341776625910,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e74bcb1d-5b31-4ba9-b1d1-c80d3be8f404,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:285ebc851008c1e271bd89d726e4c7c34641f42fe96ca74ac20fdc3e69d065b0,PodSandboxId:3157b13e58ebad372548ef168eef02ea3211c221f54f687c1e151224d54174ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744633313295695674,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ec9aa0c-1b19-4612-8c4a-edfb0a202b47,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead90e0735f84a7e5b010fbd03b4ebd9ec89c4215461908804a4241436644891,PodSandboxId:1bec071bff4f4df6784ad5a5fa422b3bf3c474d42735c7cd5afbdb26e1c17d5a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744633300129008628,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-7jw9p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ade92bc8-f38a-43f3-8458-2e2cadc89d93,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ddf65174080c196bb818acc8c2961f3a111d00578eda1eafa04a5b8e6143d5d7,PodSandboxId:7e21f0a159f920307b9d39530cf24980d1f2510ecc192ab9960d43b2a7d5c334,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744633283993183073,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2ccp8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea5e1832-9131-49dc-a437-6ffa7851ab88,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e589cfc2db0a96699694b433868f8a032278d7bde6ddfdc2fa0682db51ae184b,PodSandboxId:f2f394e588f2296f75562b96c0e8d05d1591644c040778d3010f8fdcda60cae1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744633283857544265,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m86bz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dd80802b-098e-46f8-ba6e-96510115e41a,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c5725cae0acab0587aad12bb72966743e2a9bc5bd30640e80928db9cffa938,PodSandboxId:be866046d1062db416096599d358c4a2caf9a3f05bcd078691627239b4c22225,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744633250991588891,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ct7wb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12231d01-136a-4ff4-bc3c-cbe556d70f04,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:874a78fd347b24050cbaa88eac4750024f5f426cd7ea716c11d0e909a895456d,PodSandboxId:f9066ba69b4745aaaa4dc2cc139af04423f5aa9befe45ce9ec0c993ce3881628,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744633235517156629,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5025d586-656e-41ce-8376-fb575a821770,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5df1f5bc8d444ac85372ccfff5ffced6265452036fdee8d5a842122747684dd,PodSandboxId:42c40fe05b843ea2e97831cd167bf706ae618d9949d8ba259c295be399dae4a3,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633225164013917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1f6c10-14a2-455c-8caf-39b70b1165f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3025206026698ec9de15d96d07c770250b9d5552d451a82dbcb731e62e76919,PodSandboxId:34acc80aff1b3f53ae8fbf3ca3d925ca239a6e71ae5fe9141e0155065032438a,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633223115231794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dglhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c12ef1-9bc6-4c9c-bad2-91c3183cf369,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:95977c588c52a47bbbf17190a3ac29889b5d4e7cd10a2869db6712be2794be00,PodSandboxId:cb63d32f8d222af4da108ecc394e8e5f269fe651c12807217dc7bc8a5c2d4d55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633219980194899,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lskr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7775ea2d-48b8-4556-aeed-31c03c49ca89,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c6f93ddb3c6d4613a40369bd2ac
c9376c0a7aba6b94032282ee392a0775569,PodSandboxId:afa911ac50004340e3b1b2936526dfff05335191184ac36856ab68fcd210f824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633208291618922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccaa5688b85c6c1be9e6ccbebe9eb42,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2b36f9a64ebc976a9c73a541dfe7c295eadf66cc4a9
82847684c7c95010a6,PodSandboxId:320940f3c8a0ab045a6f119cd63b53dbc7fbeff4fdce680ad1109d4bebd77e99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633208277964394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ed7e11f1ff2e42b8446edb9eb5346c,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b95a98b063dea3890e69304fd90a006faf2
f9e7d1cf6e01262266221dc7ad28,PodSandboxId:6dd475c293a9b0460966206574b473103b23fb732ed78cc957173811f99484fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633208246978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2006b4988088ec33941eb602cd42e3ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3648d3cb58bd21b7cb4c8908991a394c22e4ee016dfc0fa8f5742
a98c8da9f54,PodSandboxId:e372082e3633c9ce882cc8d843313dcdfac09d82e5dfb404cb001863881cbdc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633208187320228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-809953,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd811825fab9291283ca44253b3884a4,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e639bba8-7340-4ef5-8fa5-bc7360e1c55f name=/runtime.v1.RuntimeServ
ice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5c96f8578297b       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   c694dca47dc1a       nginx
	285ebc851008c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   3157b13e58eba       busybox
	ead90e0735f84       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             2 minutes ago       Running             controller                0                   1bec071bff4f4       ingress-nginx-controller-56d7c84fd4-7jw9p
	ddf65174080c1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   7e21f0a159f92       ingress-nginx-admission-patch-2ccp8
	e589cfc2db0a9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   f2f394e588f22       ingress-nginx-admission-create-m86bz
	c4c5725cae0ac       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     3 minutes ago       Running             amd-gpu-device-plugin     0                   be866046d1062       amd-gpu-device-plugin-ct7wb
	874a78fd347b2       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   f9066ba69b474       kube-ingress-dns-minikube
	d5df1f5bc8d44       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   42c40fe05b843       storage-provisioner
	d302520602669       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   34acc80aff1b3       coredns-668d6bf9bc-dglhn
	95977c588c52a       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago       Running             kube-proxy                0                   cb63d32f8d222       kube-proxy-lskr4
	98c6f93ddb3c6       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             4 minutes ago       Running             kube-scheduler            0                   afa911ac50004       kube-scheduler-addons-809953
	5de2b36f9a64e       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             4 minutes ago       Running             kube-controller-manager   0                   320940f3c8a0a       kube-controller-manager-addons-809953
	3b95a98b063de       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             4 minutes ago       Running             kube-apiserver            0                   6dd475c293a9b       kube-apiserver-addons-809953
	3648d3cb58bd2       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   e372082e3633c       etcd-addons-809953
	
	
	==> coredns [d3025206026698ec9de15d96d07c770250b9d5552d451a82dbcb731e62e76919] <==
	[INFO] 10.244.0.8:41891 - 46922 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00022534s
	[INFO] 10.244.0.8:41891 - 44095 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000113669s
	[INFO] 10.244.0.8:41891 - 19408 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000291139s
	[INFO] 10.244.0.8:41891 - 49705 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000113522s
	[INFO] 10.244.0.8:41891 - 659 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000174586s
	[INFO] 10.244.0.8:41891 - 10962 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000105579s
	[INFO] 10.244.0.8:41891 - 14698 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00013989s
	[INFO] 10.244.0.8:46305 - 46828 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149957s
	[INFO] 10.244.0.8:46305 - 47117 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000401333s
	[INFO] 10.244.0.8:34535 - 11599 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122342s
	[INFO] 10.244.0.8:34535 - 11839 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000287317s
	[INFO] 10.244.0.8:41236 - 11661 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000134676s
	[INFO] 10.244.0.8:41236 - 11903 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000269813s
	[INFO] 10.244.0.8:59216 - 40365 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130106s
	[INFO] 10.244.0.8:59216 - 40556 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000260813s
	[INFO] 10.244.0.23:56099 - 19521 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000362522s
	[INFO] 10.244.0.23:33009 - 29042 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017381s
	[INFO] 10.244.0.23:44086 - 61285 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139494s
	[INFO] 10.244.0.23:52105 - 2949 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000176164s
	[INFO] 10.244.0.23:44324 - 963 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119333s
	[INFO] 10.244.0.23:43804 - 47089 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144752s
	[INFO] 10.244.0.23:35178 - 38815 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003425551s
	[INFO] 10.244.0.23:46881 - 53377 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.00413608s
	[INFO] 10.244.0.27:58765 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000399966s
	[INFO] 10.244.0.27:49960 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000139148s
	
	
	==> describe nodes <==
	Name:               addons-809953
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-809953
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9d10b41d083ee2064b1b8c7e16503e13b1847696
	                    minikube.k8s.io/name=addons-809953
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T12_20_14_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-809953
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 12:20:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-809953
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 12:24:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 12:23:27 +0000   Mon, 14 Apr 2025 12:20:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 12:23:27 +0000   Mon, 14 Apr 2025 12:20:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 12:23:27 +0000   Mon, 14 Apr 2025 12:20:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 12:23:27 +0000   Mon, 14 Apr 2025 12:20:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    addons-809953
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 040a0c0e80bc4a29b4697adb24d11c25
	  System UUID:                040a0c0e-80bc-4a29-b469-7adb24d11c25
	  Boot ID:                    b8276555-dcf0-4718-bdaf-ecb2c6fef6b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  default                     hello-world-app-7d9564db4-bqbqg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-7jw9p    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m14s
	  kube-system                 amd-gpu-device-plugin-ct7wb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 coredns-668d6bf9bc-dglhn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m22s
	  kube-system                 etcd-addons-809953                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m29s
	  kube-system                 kube-apiserver-addons-809953                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-addons-809953        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-lskr4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-addons-809953                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m33s (x8 over 4m33s)  kubelet          Node addons-809953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m33s (x8 over 4m33s)  kubelet          Node addons-809953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m33s (x7 over 4m33s)  kubelet          Node addons-809953 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m27s                  kubelet          Node addons-809953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s                  kubelet          Node addons-809953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s                  kubelet          Node addons-809953 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m26s                  kubelet          Node addons-809953 status is now: NodeReady
	  Normal  RegisteredNode           4m23s                  node-controller  Node addons-809953 event: Registered Node addons-809953 in Controller
	
	
	==> dmesg <==
	[  +0.064175] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.993427] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	[  +0.084105] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.802257] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +1.022278] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.031670] kauditd_printk_skb: 89 callbacks suppressed
	[  +5.034974] kauditd_printk_skb: 123 callbacks suppressed
	[  +6.487267] kauditd_printk_skb: 93 callbacks suppressed
	[Apr14 12:21] kauditd_printk_skb: 7 callbacks suppressed
	[ +17.336510] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.003352] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.001353] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.140880] kauditd_printk_skb: 37 callbacks suppressed
	[ +10.752306] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.976277] kauditd_printk_skb: 9 callbacks suppressed
	[Apr14 12:22] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.454669] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.274045] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.124145] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.004229] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.794018] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.197302] kauditd_printk_skb: 13 callbacks suppressed
	[  +9.407373] kauditd_printk_skb: 41 callbacks suppressed
	[Apr14 12:23] kauditd_printk_skb: 15 callbacks suppressed
	[Apr14 12:24] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [3648d3cb58bd21b7cb4c8908991a394c22e4ee016dfc0fa8f5742a98c8da9f54] <==
	{"level":"info","ts":"2025-04-14T12:21:38.675762Z","caller":"traceutil/trace.go:171","msg":"trace[1917373973] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1119; }","duration":"120.536665ms","start":"2025-04-14T12:21:38.555213Z","end":"2025-04-14T12:21:38.675750Z","steps":["trace[1917373973] 'agreement among raft nodes before linearized reading'  (duration: 119.416704ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:21:38.674684Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.293113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:21:38.676062Z","caller":"traceutil/trace.go:171","msg":"trace[1270904863] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"388.689079ms","start":"2025-04-14T12:21:38.287365Z","end":"2025-04-14T12:21:38.676054Z","steps":["trace[1270904863] 'agreement among raft nodes before linearized reading'  (duration: 387.282107ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:21:38.676124Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T12:21:38.287352Z","time spent":"388.751001ms","remote":"127.0.0.1:40698","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-04-14T12:21:41.877350Z","caller":"traceutil/trace.go:171","msg":"trace[265356985] linearizableReadLoop","detail":"{readStateIndex:1164; appliedIndex:1163; }","duration":"430.05685ms","start":"2025-04-14T12:21:41.447277Z","end":"2025-04-14T12:21:41.877334Z","steps":["trace[265356985] 'read index received'  (duration: 429.852408ms)","trace[265356985] 'applied index is now lower than readState.Index'  (duration: 203.637µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T12:21:41.877498Z","caller":"traceutil/trace.go:171","msg":"trace[1381397907] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"457.499804ms","start":"2025-04-14T12:21:41.419990Z","end":"2025-04-14T12:21:41.877490Z","steps":["trace[1381397907] 'process raft request'  (duration: 457.191864ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:21:41.877602Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T12:21:41.419970Z","time spent":"457.56522ms","remote":"127.0.0.1:40676","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1120 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-04-14T12:21:41.877814Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"396.867547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:21:41.878806Z","caller":"traceutil/trace.go:171","msg":"trace[1534681036] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"397.883804ms","start":"2025-04-14T12:21:41.480906Z","end":"2025-04-14T12:21:41.878790Z","steps":["trace[1534681036] 'agreement among raft nodes before linearized reading'  (duration: 396.841695ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:21:41.878167Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"430.914324ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:21:41.879203Z","caller":"traceutil/trace.go:171","msg":"trace[308615602] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1132; }","duration":"431.974018ms","start":"2025-04-14T12:21:41.447215Z","end":"2025-04-14T12:21:41.879189Z","steps":["trace[308615602] 'agreement among raft nodes before linearized reading'  (duration: 430.927238ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:21:41.879296Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T12:21:41.447199Z","time spent":"432.065265ms","remote":"127.0.0.1:40810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":28,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true "}
	{"level":"warn","ts":"2025-04-14T12:21:41.878243Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.929913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:21:41.879507Z","caller":"traceutil/trace.go:171","msg":"trace[1248251247] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"215.200071ms","start":"2025-04-14T12:21:41.664290Z","end":"2025-04-14T12:21:41.879490Z","steps":["trace[1248251247] 'agreement among raft nodes before linearized reading'  (duration: 213.941813ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:21:41.879305Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T12:21:41.480890Z","time spent":"398.393256ms","remote":"127.0.0.1:40698","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-04-14T12:21:41.878272Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"323.287397ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:21:41.879629Z","caller":"traceutil/trace.go:171","msg":"trace[2082521070] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1132; }","duration":"324.641012ms","start":"2025-04-14T12:21:41.554982Z","end":"2025-04-14T12:21:41.879623Z","steps":["trace[2082521070] 'agreement among raft nodes before linearized reading'  (duration: 323.271855ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:22:07.172141Z","caller":"traceutil/trace.go:171","msg":"trace[1743920272] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"196.967626ms","start":"2025-04-14T12:22:06.975159Z","end":"2025-04-14T12:22:07.172126Z","steps":["trace[1743920272] 'process raft request'  (duration: 196.855727ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:22:17.600425Z","caller":"traceutil/trace.go:171","msg":"trace[1811653544] linearizableReadLoop","detail":"{readStateIndex:1408; appliedIndex:1407; }","duration":"206.182443ms","start":"2025-04-14T12:22:17.394227Z","end":"2025-04-14T12:22:17.600410Z","steps":["trace[1811653544] 'read index received'  (duration: 206.065056ms)","trace[1811653544] 'applied index is now lower than readState.Index'  (duration: 116.955µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T12:22:17.600708Z","caller":"traceutil/trace.go:171","msg":"trace[698685376] transaction","detail":"{read_only:false; response_revision:1365; number_of_response:1; }","duration":"389.196107ms","start":"2025-04-14T12:22:17.211501Z","end":"2025-04-14T12:22:17.600697Z","steps":["trace[698685376] 'process raft request'  (duration: 388.833414ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:22:17.600823Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T12:22:17.211483Z","time spent":"389.281139ms","remote":"127.0.0.1:40782","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1311 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2025-04-14T12:22:17.601122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.886465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-04-14T12:22:17.601148Z","caller":"traceutil/trace.go:171","msg":"trace[879311228] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1365; }","duration":"206.938372ms","start":"2025-04-14T12:22:17.394204Z","end":"2025-04-14T12:22:17.601142Z","steps":["trace[879311228] 'agreement among raft nodes before linearized reading'  (duration: 206.845854ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:22:48.918970Z","caller":"traceutil/trace.go:171","msg":"trace[56294377] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1616; }","duration":"262.867479ms","start":"2025-04-14T12:22:48.656089Z","end":"2025-04-14T12:22:48.918957Z","steps":["trace[56294377] 'process raft request'  (duration: 262.447081ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:22:56.840635Z","caller":"traceutil/trace.go:171","msg":"trace[2066577384] transaction","detail":"{read_only:false; response_revision:1646; number_of_response:1; }","duration":"137.025841ms","start":"2025-04-14T12:22:56.703595Z","end":"2025-04-14T12:22:56.840621Z","steps":["trace[2066577384] 'process raft request'  (duration: 136.941937ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:24:40 up 5 min,  0 users,  load average: 1.13, 1.52, 0.76
	Linux addons-809953 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3b95a98b063dea3890e69304fd90a006faf2f9e7d1cf6e01262266221dc7ad28] <==
	E0414 12:21:03.920771       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0414 12:22:00.178178       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:35592: use of closed network connection
	E0414 12:22:00.398324       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:35610: use of closed network connection
	I0414 12:22:09.949239       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.248.187"}
	I0414 12:22:15.041019       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0414 12:22:15.904643       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0414 12:22:16.131322       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.48.87"}
	W0414 12:22:16.199943       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0414 12:22:58.884199       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0414 12:22:58.951101       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0414 12:23:04.905748       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0414 12:23:19.220127       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 12:23:19.220181       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 12:23:19.248783       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 12:23:19.248822       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 12:23:19.265656       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 12:23:19.265735       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 12:23:19.281778       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 12:23:19.281929       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 12:23:19.327996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 12:23:19.328038       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0414 12:23:20.282963       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0414 12:23:20.328061       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0414 12:23:20.455653       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0414 12:24:38.835653       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.29.40"}
	
	
	==> kube-controller-manager [5de2b36f9a64ebc976a9c73a541dfe7c295eadf66cc4a982847684c7c95010a6] <==
	W0414 12:23:48.532951       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:23:48.533059       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 12:23:50.907806       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 12:23:50.908948       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0414 12:23:50.909915       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:23:50.909996       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 12:24:21.875761       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 12:24:21.877049       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0414 12:24:21.878282       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:24:21.878359       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 12:24:24.042133       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 12:24:24.043093       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0414 12:24:24.043998       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:24:24.044050       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 12:24:25.457015       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 12:24:25.458106       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0414 12:24:25.458979       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:24:25.459062       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 12:24:38.565499       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 12:24:38.567225       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0414 12:24:38.568958       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 12:24:38.569062       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0414 12:24:38.660316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="49.251343ms"
	I0414 12:24:38.681264       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="20.712726ms"
	I0414 12:24:38.681395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="40.537µs"
	
	
	==> kube-proxy [95977c588c52a47bbbf17190a3ac29889b5d4e7cd10a2869db6712be2794be00] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 12:20:21.865087       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 12:20:22.045041       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.2"]
	E0414 12:20:22.045221       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 12:20:22.638426       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 12:20:22.638487       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 12:20:22.638520       1 server_linux.go:170] "Using iptables Proxier"
	I0414 12:20:22.680348       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 12:20:22.683201       1 server.go:497] "Version info" version="v1.32.2"
	I0414 12:20:22.683240       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:20:22.700117       1 config.go:199] "Starting service config controller"
	I0414 12:20:22.700161       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 12:20:22.700190       1 config.go:105] "Starting endpoint slice config controller"
	I0414 12:20:22.700204       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 12:20:22.700720       1 config.go:329] "Starting node config controller"
	I0414 12:20:22.700730       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 12:20:22.800765       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 12:20:22.800890       1 shared_informer.go:320] Caches are synced for service config
	I0414 12:20:22.809314       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [98c6f93ddb3c6d4613a40369bd2acc9376c0a7aba6b94032282ee392a0775569] <==
	W0414 12:20:11.527720       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0414 12:20:11.527758       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:11.536644       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0414 12:20:11.536695       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:11.676146       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 12:20:11.677001       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:11.699016       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0414 12:20:11.699067       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:11.745937       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0414 12:20:11.745986       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:11.759085       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 12:20:11.759137       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:11.943979       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0414 12:20:11.944027       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:11.971607       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0414 12:20:11.971703       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:12.042741       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 12:20:12.042942       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:12.054506       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 12:20:12.054606       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:12.071574       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0414 12:20:12.071692       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0414 12:20:12.181200       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 12:20:12.181251       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0414 12:20:14.620111       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 12:24:13 addons-809953 kubelet[1242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 12:24:13 addons-809953 kubelet[1242]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 12:24:13 addons-809953 kubelet[1242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 12:24:13 addons-809953 kubelet[1242]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 12:24:13 addons-809953 kubelet[1242]: E0414 12:24:13.567084    1242 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633453566428844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:24:13 addons-809953 kubelet[1242]: E0414 12:24:13.567115    1242 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633453566428844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:24:17 addons-809953 kubelet[1242]: I0414 12:24:17.404250    1242 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ct7wb" secret="" err="secret \"gcp-auth\" not found"
	Apr 14 12:24:23 addons-809953 kubelet[1242]: E0414 12:24:23.570383    1242 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633463569921179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:24:23 addons-809953 kubelet[1242]: E0414 12:24:23.570719    1242 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633463569921179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:24:25 addons-809953 kubelet[1242]: I0414 12:24:25.399471    1242 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 14 12:24:33 addons-809953 kubelet[1242]: E0414 12:24:33.575088    1242 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633473574469210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:24:33 addons-809953 kubelet[1242]: E0414 12:24:33.575470    1242 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633473574469210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.642894    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="7f35da1d-877e-416b-89d1-6946a28bb786" containerName="csi-attacher"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643288    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="4a1aa3b6-a3a3-4764-98b9-b13679a80f74" containerName="csi-resizer"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643333    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="ec27f403-a116-4dc2-bf2c-39e62a1036d8" containerName="csi-snapshotter"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643369    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="d20348f4-3b94-4789-9520-97fb293e0496" containerName="volume-snapshot-controller"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643400    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="2cf59810-db26-466e-b1f9-5221096574c8" containerName="task-pv-container"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643431    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="ec27f403-a116-4dc2-bf2c-39e62a1036d8" containerName="hostpath"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643462    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="ec27f403-a116-4dc2-bf2c-39e62a1036d8" containerName="csi-provisioner"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643496    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="bf21029f-92d5-49df-8c6c-36958aeb8133" containerName="volume-snapshot-controller"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643528    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="ec27f403-a116-4dc2-bf2c-39e62a1036d8" containerName="node-driver-registrar"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643585    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="0627eadc-1223-4c1f-9107-6430390c4fd0" containerName="local-path-provisioner"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643623    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="ec27f403-a116-4dc2-bf2c-39e62a1036d8" containerName="liveness-probe"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.643654    1242 memory_manager.go:355] "RemoveStaleState removing state" podUID="ec27f403-a116-4dc2-bf2c-39e62a1036d8" containerName="csi-external-health-monitor-controller"
	Apr 14 12:24:38 addons-809953 kubelet[1242]: I0414 12:24:38.751072    1242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lw24\" (UniqueName: \"kubernetes.io/projected/b02aed63-9163-4b26-88b2-3128e2c25c4a-kube-api-access-7lw24\") pod \"hello-world-app-7d9564db4-bqbqg\" (UID: \"b02aed63-9163-4b26-88b2-3128e2c25c4a\") " pod="default/hello-world-app-7d9564db4-bqbqg"
	
	
	==> storage-provisioner [d5df1f5bc8d444ac85372ccfff5ffced6265452036fdee8d5a842122747684dd] <==
	I0414 12:20:26.506261       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 12:20:26.654439       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 12:20:26.654496       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 12:20:26.787448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 12:20:26.787637       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-809953_7015b919-04d6-4ca9-ba87-b7d03c811906!
	I0414 12:20:26.788830       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d745931e-1e1a-48ce-8aae-9af2492de1b9", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-809953_7015b919-04d6-4ca9-ba87-b7d03c811906 became leader
	I0414 12:20:26.992832       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-809953_7015b919-04d6-4ca9-ba87-b7d03c811906!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-809953 -n addons-809953
helpers_test.go:261: (dbg) Run:  kubectl --context addons-809953 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-bqbqg ingress-nginx-admission-create-m86bz ingress-nginx-admission-patch-2ccp8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-809953 describe pod hello-world-app-7d9564db4-bqbqg ingress-nginx-admission-create-m86bz ingress-nginx-admission-patch-2ccp8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-809953 describe pod hello-world-app-7d9564db4-bqbqg ingress-nginx-admission-create-m86bz ingress-nginx-admission-patch-2ccp8: exit status 1 (80.250028ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-bqbqg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-809953/192.168.39.2
	Start Time:       Mon, 14 Apr 2025 12:24:38 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7lw24 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7lw24:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-bqbqg to addons-809953
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-m86bz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2ccp8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-809953 describe pod hello-world-app-7d9564db4-bqbqg ingress-nginx-admission-create-m86bz ingress-nginx-admission-patch-2ccp8: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-809953 addons disable ingress-dns --alsologtostderr -v=1: (1.662924793s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-809953 addons disable ingress --alsologtostderr -v=1: (7.848036111s)
--- FAIL: TestAddons/parallel/Ingress (155.22s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (190.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b300b501-5ebd-47df-a358-754ea70df398] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004036141s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-760045 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-760045 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-760045 get pvc myclaim -o=json
I0414 12:30:03.450030 1175746 retry.go:31] will retry after 1.575215201s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2014da10-195c-4c51-80df-c3118a12974a ResourceVersion:767 Generation:0 CreationTimestamp:2025-04-14 12:30:03 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-2014da10-195c-4c51-80df-c3118a12974a StorageClassName:0xc0018d91d0 VolumeMode:0xc0018d91e0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-760045 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-760045 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [043b9cf5-ee9c-4156-8778-2420e4c78ede] Pending
helpers_test.go:344: "sp-pod" [043b9cf5-ee9c-4156-8778-2420e4c78ede] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
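For reference, the readiness poll that times out here is roughly equivalent to the following kubectl wait invocation (an approximation of what the harness checks, not the command it actually runs):

# Approximation of the harness's 3m readiness poll on label test=storage-provisioner.
kubectl --context functional-760045 wait --for=condition=Ready pod \
  -l test=storage-provisioner -n default --timeout=3m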
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-760045 -n functional-760045
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-04-14 12:33:05.520972835 +0000 UTC m=+859.454180004
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-760045 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-760045 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-760045/192.168.39.48
Start Time:       Mon, 14 Apr 2025 12:30:05 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6bwl (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-g6bwl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  3m                  default-scheduler  Successfully assigned default/sp-pod to functional-760045
  Warning  Failed     114s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     40s (x2 over 114s)  kubelet            Error: ErrImagePull
  Warning  Failed     40s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    25s (x2 over 113s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     25s (x2 over 113s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    14s (x3 over 3m)    kubelet            Pulling image "docker.io/nginx"
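Pieced together from the describe output above, the pod that fails to pull corresponds approximately to this manifest (a sketch reconstructed from the log; the actual testdata/storage-provisioner/pod.yaml may differ):

kubectl --context functional-760045 apply -f - <<'EOF'
# Sketch reconstructed from the "kubectl describe po sp-pod" output above;
# not the literal testdata file.
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  namespace: default
  labels:
    test: storage-provisioner
spec:
  containers:
    - name: myfrontend
      image: docker.io/nginx
      volumeMounts:
        - name: mypd
          mountPath: /tmp/mount
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
EOF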
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-760045 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-760045 logs sp-pod -n default: exit status 1 (76.440245ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-760045 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
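The underlying failure is Docker Hub's anonymous pull rate limit (the toomanyrequests events above). One way to inspect the remaining quota from the affected host is Docker's documented rate-limit check, shown here as a generic example that assumes curl and jq are available; it is not part of the test run:

# Query Docker Hub's rate-limit preview endpoint; the ratelimit-limit and
# ratelimit-remaining response headers show the current anonymous quota.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -sI -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit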
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-760045 -n functional-760045
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-760045 logs -n 25: (1.643970372s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-760045 ssh sudo cat                                          | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /etc/ssl/certs/11757462.pem                                             |                   |         |         |                     |                     |
	| ssh            | functional-760045 ssh sudo cat                                          | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /usr/share/ca-certificates/11757462.pem                                 |                   |         |         |                     |                     |
	| ssh            | functional-760045 ssh sudo cat                                          | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                               |                   |         |         |                     |                     |
	| license        |                                                                         | minikube          | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	| ssh            | functional-760045 ssh sudo                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC |                     |
	|                | systemctl is-active docker                                              |                   |         |         |                     |                     |
	| ssh            | functional-760045 ssh sudo                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC |                     |
	|                | systemctl is-active containerd                                          |                   |         |         |                     |                     |
	| image          | functional-760045 image load --daemon                                   | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | kicbase/echo-server:functional-760045                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045 image ls                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	| image          | functional-760045 image load --daemon                                   | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | kicbase/echo-server:functional-760045                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045 image ls                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	| image          | functional-760045 image save kicbase/echo-server:functional-760045      | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045 image rm                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | kicbase/echo-server:functional-760045                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045 image ls                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	| image          | functional-760045 image load                                            | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-760045 ssh sudo cat                                          | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /etc/test/nested/copy/1175746/hosts                                     |                   |         |         |                     |                     |
	| image          | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-760045 ssh pgrep                                             | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-760045 image build -t                                        | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | localhost/my-image:functional-760045                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-760045 image ls                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	| image          | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| update-context | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:30:43
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:30:43.547838 1183689 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:30:43.548264 1183689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:30:43.548284 1183689 out.go:358] Setting ErrFile to fd 2...
	I0414 12:30:43.548293 1183689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:30:43.548634 1183689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 12:30:43.549280 1183689 out.go:352] Setting JSON to false
	I0414 12:30:43.550584 1183689 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15191,"bootTime":1744618653,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:30:43.550714 1183689 start.go:139] virtualization: kvm guest
	I0414 12:30:43.553120 1183689 out.go:177] * [functional-760045] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:30:43.555095 1183689 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 12:30:43.555108 1183689 notify.go:220] Checking for updates...
	I0414 12:30:43.558295 1183689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:30:43.559959 1183689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 12:30:43.561746 1183689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 12:30:43.563374 1183689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:30:43.564876 1183689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:30:43.566745 1183689 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:30:43.567233 1183689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:30:43.567360 1183689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:30:43.586143 1183689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0414 12:30:43.586654 1183689 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:30:43.587291 1183689 main.go:141] libmachine: Using API Version  1
	I0414 12:30:43.587310 1183689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:30:43.587764 1183689 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:30:43.588013 1183689 main.go:141] libmachine: (functional-760045) Calling .DriverName
	I0414 12:30:43.588336 1183689 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:30:43.588656 1183689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:30:43.588704 1183689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:30:43.605224 1183689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I0414 12:30:43.605913 1183689 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:30:43.606553 1183689 main.go:141] libmachine: Using API Version  1
	I0414 12:30:43.606573 1183689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:30:43.607074 1183689 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:30:43.607344 1183689 main.go:141] libmachine: (functional-760045) Calling .DriverName
	I0414 12:30:43.651464 1183689 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 12:30:43.653085 1183689 start.go:297] selected driver: kvm2
	I0414 12:30:43.653112 1183689 start.go:901] validating driver "kvm2" against &{Name:functional-760045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-760045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:30:43.653253 1183689 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:30:43.654362 1183689 cni.go:84] Creating CNI manager for ""
	I0414 12:30:43.654419 1183689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:30:43.654472 1183689 start.go:340] cluster config:
	{Name:functional-760045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-760045 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:30:43.656713 1183689 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.475957750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=193413c1-d706-4ebc-b4da-032116bc9cb4 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.477273829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c539b64c-ccff-4582-821d-ff5d98b9787b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.478527642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633986478459887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c539b64c-ccff-4582-821d-ff5d98b9787b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.479520701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7006fd62-ffb2-4b96-aaa9-8f561d2b4ce6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.479597484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7006fd62-ffb2-4b96-aaa9-8f561d2b4ce6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.480452143Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3cbad5ed611c0eba69f86bcf0ab8537b10e2e840003883ff68795ab32e27c7e8,PodSandboxId:090136b8086e52d18ddfd121f4aad5a8f34828376447c06ae252035542f4a57c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744633914404854095,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-chgtk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 99f2e74c-7f7a-4849-a298-3eccf7ce50ba,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4589cfa0059b8034d234e2f1aab9c1a6fa015423a4a6a89e481feb427bb248,PodSandboxId:a86aff1655956d9b0c55d74f060ba4d26d8dc2ddd9164adbb1e2b6f2c878539a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744633908724732398,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-z4vd8,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4cd0930a-ab95-47c9-b854-60d4d02c1899,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d267022129b9774026c65c59fbef8a8695d761396266debf5a4d3705b0c40ac5,PodSandboxId:bbc7a9734e4a5f79f6e3f4672572c3e1066f6e008a76b85a14d327ff59cc9944,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744633874560295396,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb6157d-76c5-48e2-a0f6-eb478bf84611,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25396a72d045eade6f158fec8ee8e450282199e20553060df500f6d07301a5,PodSandboxId:fdca15f19904fb16e8641ed654773e98dcb64355ed9838291b4ecaf03a751fbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833330170913,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-m9jsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78c2bfff-6f14-4194-b794-ee10b06219da,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d35b9603ae6ddaa2fd9425a00d44e4f06eecd3fca6a494734797b33df862a7a,PodSandboxId:7ac6321e04f9addd7e44ff6b2a6aa5a9a273619c2b8624c85326335a3d470e8c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833239696721,Labels:map[string]string{
io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-klbg6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3411c5b6-bfba-4c65-b22a-1375ad82b576,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f2e0b58d2f3d7dfd257100e3b5bfcdf2704fb442ea950363a7b76fa03164d,PodSandboxId:9f7b911af306be6bfe1580264196b8fd71cbd3c174f28aaf7520338e85ea3ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633775222416762,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d0aa2a13f3b9e963a5525a98316fdb450bdedd36f34b8807639fc299fd728e,PodSandboxId:4e1ef32351550387eb9b52365521b118d80faf9603d5cce495f2f9d6b17427a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633774670495721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e530384f37b390f123f9938b79fbf3be5157db5a590a6544ca36af743bc39b1c,PodSandboxId:dba57defd9231bb47ece00be2daed649a29c184bad1e1b79deae1065e817d43a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633774663612816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708549be1081b704ef9ac4242734030c4d88bf88d3fd1c7901014d85549b1358,PodSandboxId:da2463b9deb985b8d3a88c510b0bac83de1d809335f15dddc4058ba6e0f17012,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a1747
38baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633770973864151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2591a1078d77eeb97d04aada88e7b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d355b0cf1e9f8799ffb38b5c24435802ebadef16716cb1d1a24263db78bef5ae,PodSandboxId:7fcf91bcdaea44a5969fd7514fd4fef7013bc36e84e1bc44c15cdc8e4f0f8e4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a
9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633770870792994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde8cbf028726ed80b1bc9b94ee92811ad76b704ee8721f1458f774c86feb5e7,PodSandboxId:9c9534901c6c8c0fae018b27408c5b4b88723ea4135ce110170c2f39ee855056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff
195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633770825814387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016878b0ed0a6d4f8a7c2d3161cb1c8a23e3829a0b468ebd97f4cf25b5f23c75,PodSandboxId:c662ecf27d35bb6ba782c62cb4c8775a4dc39536a8a8055c724d48a5f20840f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633770858986624,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d59fc41d941c2eaf6cdebce8611d06daaafcb9eaafcc0cef5b05b315973518,PodSandboxId:3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562,State:CONTAINER_EXITED,CreatedAt:1744633744246900443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08dc9296e80d47e6714ed00eb35508495621996f34120627ac6753ae4cd62,PodSandboxId:14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_
EXITED,CreatedAt:1744633732548629328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cc318416a2e93e0f486990973f00fbe7eadce25e6d4c5a9df4b5864b134eb,PodSandboxId:47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744633732511353905,L
abels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0bd14547fc66bbf0334984d041096ff9389559194f95a61071dd4f76462fc4,PodSandboxId:2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bc
a912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744633727923887746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c7b28e5bcd8258b42a5bb54da8259e6937eb70a21fec15edbfde289b7d3bfb,PodSandboxId:ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247
abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744633727862506473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e58c4d42bf0be189248882fccc4f92d682942ad3be7b4c1dc9664fa95be5b42,PodSandboxId:a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744633727892806125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7006fd62-ffb2-4b96-aaa9-8f561d2b4ce6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.518937303Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bcb31b4-adf4-4279-8d67-c0447d0560bb name=/runtime.v1.RuntimeService/Version
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.519165819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bcb31b4-adf4-4279-8d67-c0447d0560bb name=/runtime.v1.RuntimeService/Version
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.520684249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff8fb32e-9cff-4732-a645-486e8fdc10d5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.521592781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633986521556363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff8fb32e-9cff-4732-a645-486e8fdc10d5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.522433358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d93f89f-669e-480c-bdb8-526fed6ed06b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.522500897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d93f89f-669e-480c-bdb8-526fed6ed06b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.522887203Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3cbad5ed611c0eba69f86bcf0ab8537b10e2e840003883ff68795ab32e27c7e8,PodSandboxId:090136b8086e52d18ddfd121f4aad5a8f34828376447c06ae252035542f4a57c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744633914404854095,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-chgtk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 99f2e74c-7f7a-4849-a298-3eccf7ce50ba,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4589cfa0059b8034d234e2f1aab9c1a6fa015423a4a6a89e481feb427bb248,PodSandboxId:a86aff1655956d9b0c55d74f060ba4d26d8dc2ddd9164adbb1e2b6f2c878539a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744633908724732398,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-z4vd8,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4cd0930a-ab95-47c9-b854-60d4d02c1899,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d267022129b9774026c65c59fbef8a8695d761396266debf5a4d3705b0c40ac5,PodSandboxId:bbc7a9734e4a5f79f6e3f4672572c3e1066f6e008a76b85a14d327ff59cc9944,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744633874560295396,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb6157d-76c5-48e2-a0f6-eb478bf84611,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25396a72d045eade6f158fec8ee8e450282199e20553060df500f6d07301a5,PodSandboxId:fdca15f19904fb16e8641ed654773e98dcb64355ed9838291b4ecaf03a751fbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833330170913,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-m9jsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78c2bfff-6f14-4194-b794-ee10b06219da,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d35b9603ae6ddaa2fd9425a00d44e4f06eecd3fca6a494734797b33df862a7a,PodSandboxId:7ac6321e04f9addd7e44ff6b2a6aa5a9a273619c2b8624c85326335a3d470e8c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833239696721,Labels:map[string]string{
io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-klbg6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3411c5b6-bfba-4c65-b22a-1375ad82b576,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f2e0b58d2f3d7dfd257100e3b5bfcdf2704fb442ea950363a7b76fa03164d,PodSandboxId:9f7b911af306be6bfe1580264196b8fd71cbd3c174f28aaf7520338e85ea3ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633775222416762,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d0aa2a13f3b9e963a5525a98316fdb450bdedd36f34b8807639fc299fd728e,PodSandboxId:4e1ef32351550387eb9b52365521b118d80faf9603d5cce495f2f9d6b17427a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633774670495721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e530384f37b390f123f9938b79fbf3be5157db5a590a6544ca36af743bc39b1c,PodSandboxId:dba57defd9231bb47ece00be2daed649a29c184bad1e1b79deae1065e817d43a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633774663612816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708549be1081b704ef9ac4242734030c4d88bf88d3fd1c7901014d85549b1358,PodSandboxId:da2463b9deb985b8d3a88c510b0bac83de1d809335f15dddc4058ba6e0f17012,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a1747
38baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633770973864151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2591a1078d77eeb97d04aada88e7b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d355b0cf1e9f8799ffb38b5c24435802ebadef16716cb1d1a24263db78bef5ae,PodSandboxId:7fcf91bcdaea44a5969fd7514fd4fef7013bc36e84e1bc44c15cdc8e4f0f8e4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a
9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633770870792994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde8cbf028726ed80b1bc9b94ee92811ad76b704ee8721f1458f774c86feb5e7,PodSandboxId:9c9534901c6c8c0fae018b27408c5b4b88723ea4135ce110170c2f39ee855056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff
195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633770825814387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016878b0ed0a6d4f8a7c2d3161cb1c8a23e3829a0b468ebd97f4cf25b5f23c75,PodSandboxId:c662ecf27d35bb6ba782c62cb4c8775a4dc39536a8a8055c724d48a5f20840f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633770858986624,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d59fc41d941c2eaf6cdebce8611d06daaafcb9eaafcc0cef5b05b315973518,PodSandboxId:3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562,State:CONTAINER_EXITED,CreatedAt:1744633744246900443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08dc9296e80d47e6714ed00eb35508495621996f34120627ac6753ae4cd62,PodSandboxId:14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_
EXITED,CreatedAt:1744633732548629328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cc318416a2e93e0f486990973f00fbe7eadce25e6d4c5a9df4b5864b134eb,PodSandboxId:47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744633732511353905,L
abels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0bd14547fc66bbf0334984d041096ff9389559194f95a61071dd4f76462fc4,PodSandboxId:2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bc
a912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744633727923887746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c7b28e5bcd8258b42a5bb54da8259e6937eb70a21fec15edbfde289b7d3bfb,PodSandboxId:ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247
abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744633727862506473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e58c4d42bf0be189248882fccc4f92d682942ad3be7b4c1dc9664fa95be5b42,PodSandboxId:a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744633727892806125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d93f89f-669e-480c-bdb8-526fed6ed06b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.562148158Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f487ef24-c613-42eb-93a1-e7c6d0be7d00 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.562592672Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:201bbff8b1ee055c592254a932795e744d01f8ae1fc19fe0e20ead213a29b03f,Metadata:&PodSandboxMetadata{Name:mysql-58ccfd96bb-mdpt4,Uid:7c8c4c0e-fc92-4dde-933d-2daf5e4c8526,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744633887669670104,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-58ccfd96bb-mdpt4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c8c4c0e-fc92-4dde-933d-2daf5e4c8526,pod-template-hash: 58ccfd96bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T12:31:27.353368428Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:090136b8086e52d18ddfd121f4aad5a8f34828376447c06ae252035542f4a57c,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-chgtk,Uid:99f2e74c-7f7a-4849-a298-3eccf7ce50ba,Namespace:kubernetes-d
ashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744633846339270993,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-chgtk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 99f2e74c-7f7a-4849-a298-3eccf7ce50ba,k8s-app: kubernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T12:30:45.113836469Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a86aff1655956d9b0c55d74f060ba4d26d8dc2ddd9164adbb1e2b6f2c878539a,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-5d59dccf9b-z4vd8,Uid:4cd0930a-ab95-47c9-b854-60d4d02c1899,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744633846314008891,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-z4vd8,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4c
d0930a-ab95-47c9-b854-60d4d02c1899,k8s-app: dashboard-metrics-scraper,pod-template-hash: 5d59dccf9b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T12:30:45.104503133Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:bbc7a9734e4a5f79f6e3f4672572c3e1066f6e008a76b85a14d327ff59cc9944,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:7cb6157d-76c5-48e2-a0f6-eb478bf84611,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1744633844100541874,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb6157d-76c5-48e2-a0f6-eb478bf84611,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T12:30:43.791072053Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d1bae7c23d3ebddcd4aac56ea3efdb4ab92a88dc88142816739369597afb214d,Metadata:&PodSandboxMe
tadata{Name:sp-pod,Uid:043b9cf5-ee9c-4156-8778-2420e4c78ede,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744633805530934260,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 043b9cf5-ee9c-4156-8778-2420e4c78ede,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-04-14T12:30:05.223294337Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fdca15f19904fb16e8641ed654773e98dcb64355ed9838291b4eca
f03a751fbb,Metadata:&PodSandboxMetadata{Name:hello-node-fcfd88b6f-m9jsj,Uid:78c2bfff-6f14-4194-b794-ee10b06219da,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744633798577376712,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-fcfd88b6f-m9jsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78c2bfff-6f14-4194-b794-ee10b06219da,pod-template-hash: fcfd88b6f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T12:29:58.266902092Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ac6321e04f9addd7e44ff6b2a6aa5a9a273619c2b8624c85326335a3d470e8c,Metadata:&PodSandboxMetadata{Name:hello-node-connect-58f9cf68d8-klbg6,Uid:3411c5b6-bfba-4c65-b22a-1375ad82b576,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744633798110805679,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-klbg6,io.kubernetes.pod.name
space: default,io.kubernetes.pod.uid: 3411c5b6-bfba-4c65-b22a-1375ad82b576,pod-template-hash: 58f9cf68d8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T12:29:57.502798165Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f555f7ee57d6b098771b412cbde86cdb4bb496c75dc60b3d4580b058d639b1a0,Metadata:&PodSandboxMetadata{Name:nginx-svc,Uid:7847bae9-503c-4fa5-8c68-1b0ad432e4a7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744633797695689515,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7847bae9-503c-4fa5-8c68-1b0ad432e4a7,run: nginx-svc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\":\"nginx-svc\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docker.io/nginx:alpine\",\"name\":\"nginx\",\"p
orts\":[{\"containerPort\":80,\"protocol\":\"TCP\"}]}]}}\n,kubernetes.io/config.seen: 2025-04-14T12:29:57.385349331Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f7b911af306be6bfe1580264196b8fd71cbd3c174f28aaf7520338e85ea3ece,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-h7t9g,Uid:3d520f2d-d301-4af3-ad63-624916ce0305,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744633774771386515,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T12:29:34.319877134Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da2463b9deb985b8d3a88c510b0bac83de1d809335f15dddc4058ba6e0f17012,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-760045,Uid:1e2591a1078d77eeb97d04aada88e7b4,Namesp
ace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744633770803987083,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2591a1078d77eeb97d04aada88e7b4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.48:8441,kubernetes.io/config.hash: 1e2591a1078d77eeb97d04aada88e7b4,kubernetes.io/config.seen: 2025-04-14T12:29:30.320433244Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9c9534901c6c8c0fae018b27408c5b4b88723ea4135ce110170c2f39ee855056,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-760045,Uid:71e49f19f008c8999723c33183b5ea26,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744633768720282555,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube
-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 71e49f19f008c8999723c33183b5ea26,kubernetes.io/config.seen: 2025-04-14T12:28:47.190776473Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4e1ef32351550387eb9b52365521b118d80faf9603d5cce495f2f9d6b17427a1,Metadata:&PodSandboxMetadata{Name:kube-proxy-vf98w,Uid:480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744633768701821724,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T12:28:52.187608002Z,kubernetes.io/config.source: api,}
,RuntimeHandler:,},&PodSandbox{Id:7fcf91bcdaea44a5969fd7514fd4fef7013bc36e84e1bc44c15cdc8e4f0f8e4c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-760045,Uid:7adf4ebe949d159f6be06adfa228b5a9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744633768665448540,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7adf4ebe949d159f6be06adfa228b5a9,kubernetes.io/config.seen: 2025-04-14T12:28:47.190777406Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dba57defd9231bb47ece00be2daed649a29c184bad1e1b79deae1065e817d43a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b300b501-5ebd-47df-a358-754ea70df398,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744633768661749209,Labels:map[stri
ng]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes
.io/config.seen: 2025-04-14T12:28:52.187621021Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c662ecf27d35bb6ba782c62cb4c8775a4dc39536a8a8055c724d48a5f20840f6,Metadata:&PodSandboxMetadata{Name:etcd-functional-760045,Uid:1afd23a171634d4b565b0afc9c52f067,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1744633768556413272,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.48:2379,kubernetes.io/config.hash: 1afd23a171634d4b565b0afc9c52f067,kubernetes.io/config.seen: 2025-04-14T12:28:47.190771832Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-h7t9g,Uid:3d
520f2d-d301-4af3-ad63-624916ce0305,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744633703715160507,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T12:28:06.224468101Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-760045,Uid:7adf4ebe949d159f6be06adfa228b5a9,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744633703588408902,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa
228b5a9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7adf4ebe949d159f6be06adfa228b5a9,kubernetes.io/config.seen: 2025-04-14T12:28:01.244549342Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af,Metadata:&PodSandboxMetadata{Name:etcd-functional-760045,Uid:1afd23a171634d4b565b0afc9c52f067,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744633703524284931,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.48:2379,kubernetes.io/config.hash: 1afd23a171634d4b565b0afc9c52f067,kubernetes.io/config.seen: 2025-04-14T12:28:01.244538141Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox
{Id:2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-760045,Uid:71e49f19f008c8999723c33183b5ea26,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744633703523539161,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 71e49f19f008c8999723c33183b5ea26,kubernetes.io/config.seen: 2025-04-14T12:28:01.244547823Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b300b501-5ebd-47df-a358-754ea70df398,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744633703510507696,Labels:map[s
tring]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kuberne
tes.io/config.seen: 2025-04-14T12:28:07.176515371Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e,Metadata:&PodSandboxMetadata{Name:kube-proxy-vf98w,Uid:480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1744633703488617511,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-14T12:28:05.974643937Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f487ef24-c613-42eb-93a1-e7c6d0be7d00 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.563572388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19f393f4-510d-4bd0-920f-2d9652bf4f4a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.563634570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19f393f4-510d-4bd0-920f-2d9652bf4f4a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.563998987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3cbad5ed611c0eba69f86bcf0ab8537b10e2e840003883ff68795ab32e27c7e8,PodSandboxId:090136b8086e52d18ddfd121f4aad5a8f34828376447c06ae252035542f4a57c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744633914404854095,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-chgtk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 99f2e74c-7f7a-4849-a298-3eccf7ce50ba,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4589cfa0059b8034d234e2f1aab9c1a6fa015423a4a6a89e481feb427bb248,PodSandboxId:a86aff1655956d9b0c55d74f060ba4d26d8dc2ddd9164adbb1e2b6f2c878539a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744633908724732398,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-z4vd8,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4cd0930a-ab95-47c9-b854-60d4d02c1899,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d267022129b9774026c65c59fbef8a8695d761396266debf5a4d3705b0c40ac5,PodSandboxId:bbc7a9734e4a5f79f6e3f4672572c3e1066f6e008a76b85a14d327ff59cc9944,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744633874560295396,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb6157d-76c5-48e2-a0f6-eb478bf84611,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25396a72d045eade6f158fec8ee8e450282199e20553060df500f6d07301a5,PodSandboxId:fdca15f19904fb16e8641ed654773e98dcb64355ed9838291b4ecaf03a751fbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833330170913,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-m9jsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78c2bfff-6f14-4194-b794-ee10b06219da,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d35b9603ae6ddaa2fd9425a00d44e4f06eecd3fca6a494734797b33df862a7a,PodSandboxId:7ac6321e04f9addd7e44ff6b2a6aa5a9a273619c2b8624c85326335a3d470e8c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833239696721,Labels:map[string]string{
io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-klbg6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3411c5b6-bfba-4c65-b22a-1375ad82b576,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f2e0b58d2f3d7dfd257100e3b5bfcdf2704fb442ea950363a7b76fa03164d,PodSandboxId:9f7b911af306be6bfe1580264196b8fd71cbd3c174f28aaf7520338e85ea3ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633775222416762,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d0aa2a13f3b9e963a5525a98316fdb450bdedd36f34b8807639fc299fd728e,PodSandboxId:4e1ef32351550387eb9b52365521b118d80faf9603d5cce495f2f9d6b17427a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633774670495721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e530384f37b390f123f9938b79fbf3be5157db5a590a6544ca36af743bc39b1c,PodSandboxId:dba57defd9231bb47ece00be2daed649a29c184bad1e1b79deae1065e817d43a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633774663612816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708549be1081b704ef9ac4242734030c4d88bf88d3fd1c7901014d85549b1358,PodSandboxId:da2463b9deb985b8d3a88c510b0bac83de1d809335f15dddc4058ba6e0f17012,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a1747
38baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633770973864151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2591a1078d77eeb97d04aada88e7b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d355b0cf1e9f8799ffb38b5c24435802ebadef16716cb1d1a24263db78bef5ae,PodSandboxId:7fcf91bcdaea44a5969fd7514fd4fef7013bc36e84e1bc44c15cdc8e4f0f8e4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a
9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633770870792994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde8cbf028726ed80b1bc9b94ee92811ad76b704ee8721f1458f774c86feb5e7,PodSandboxId:9c9534901c6c8c0fae018b27408c5b4b88723ea4135ce110170c2f39ee855056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff
195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633770825814387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016878b0ed0a6d4f8a7c2d3161cb1c8a23e3829a0b468ebd97f4cf25b5f23c75,PodSandboxId:c662ecf27d35bb6ba782c62cb4c8775a4dc39536a8a8055c724d48a5f20840f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633770858986624,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d59fc41d941c2eaf6cdebce8611d06daaafcb9eaafcc0cef5b05b315973518,PodSandboxId:3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562,State:CONTAINER_EXITED,CreatedAt:1744633744246900443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08dc9296e80d47e6714ed00eb35508495621996f34120627ac6753ae4cd62,PodSandboxId:14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_
EXITED,CreatedAt:1744633732548629328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cc318416a2e93e0f486990973f00fbe7eadce25e6d4c5a9df4b5864b134eb,PodSandboxId:47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744633732511353905,L
abels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0bd14547fc66bbf0334984d041096ff9389559194f95a61071dd4f76462fc4,PodSandboxId:2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bc
a912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744633727923887746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c7b28e5bcd8258b42a5bb54da8259e6937eb70a21fec15edbfde289b7d3bfb,PodSandboxId:ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247
abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744633727862506473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e58c4d42bf0be189248882fccc4f92d682942ad3be7b4c1dc9664fa95be5b42,PodSandboxId:a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744633727892806125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19f393f4-510d-4bd0-920f-2d9652bf4f4a name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.569192866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5fc0ed9-1081-4699-9586-ee80c9442848 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.569264737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5fc0ed9-1081-4699-9586-ee80c9442848 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.570520443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=282c730a-9425-43b4-9cda-d2e1bdab138e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.571270726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633986571209297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=282c730a-9425-43b4-9cda-d2e1bdab138e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.571994046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ffa9e91a-3ba1-4b98-bf9f-54da6feb3333 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.572089096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ffa9e91a-3ba1-4b98-bf9f-54da6feb3333 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:33:06 functional-760045 crio[5227]: time="2025-04-14 12:33:06.572607457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3cbad5ed611c0eba69f86bcf0ab8537b10e2e840003883ff68795ab32e27c7e8,PodSandboxId:090136b8086e52d18ddfd121f4aad5a8f34828376447c06ae252035542f4a57c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744633914404854095,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-chgtk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 99f2e74c-7f7a-4849-a298-3eccf7ce50ba,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4589cfa0059b8034d234e2f1aab9c1a6fa015423a4a6a89e481feb427bb248,PodSandboxId:a86aff1655956d9b0c55d74f060ba4d26d8dc2ddd9164adbb1e2b6f2c878539a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744633908724732398,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-z4vd8,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4cd0930a-ab95-47c9-b854-60d4d02c1899,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d267022129b9774026c65c59fbef8a8695d761396266debf5a4d3705b0c40ac5,PodSandboxId:bbc7a9734e4a5f79f6e3f4672572c3e1066f6e008a76b85a14d327ff59cc9944,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744633874560295396,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb6157d-76c5-48e2-a0f6-eb478bf84611,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25396a72d045eade6f158fec8ee8e450282199e20553060df500f6d07301a5,PodSandboxId:fdca15f19904fb16e8641ed654773e98dcb64355ed9838291b4ecaf03a751fbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833330170913,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-m9jsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78c2bfff-6f14-4194-b794-ee10b06219da,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d35b9603ae6ddaa2fd9425a00d44e4f06eecd3fca6a494734797b33df862a7a,PodSandboxId:7ac6321e04f9addd7e44ff6b2a6aa5a9a273619c2b8624c85326335a3d470e8c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833239696721,Labels:map[string]string{
io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-klbg6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3411c5b6-bfba-4c65-b22a-1375ad82b576,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f2e0b58d2f3d7dfd257100e3b5bfcdf2704fb442ea950363a7b76fa03164d,PodSandboxId:9f7b911af306be6bfe1580264196b8fd71cbd3c174f28aaf7520338e85ea3ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633775222416762,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d0aa2a13f3b9e963a5525a98316fdb450bdedd36f34b8807639fc299fd728e,PodSandboxId:4e1ef32351550387eb9b52365521b118d80faf9603d5cce495f2f9d6b17427a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633774670495721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e530384f37b390f123f9938b79fbf3be5157db5a590a6544ca36af743bc39b1c,PodSandboxId:dba57defd9231bb47ece00be2daed649a29c184bad1e1b79deae1065e817d43a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633774663612816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708549be1081b704ef9ac4242734030c4d88bf88d3fd1c7901014d85549b1358,PodSandboxId:da2463b9deb985b8d3a88c510b0bac83de1d809335f15dddc4058ba6e0f17012,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a1747
38baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633770973864151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2591a1078d77eeb97d04aada88e7b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d355b0cf1e9f8799ffb38b5c24435802ebadef16716cb1d1a24263db78bef5ae,PodSandboxId:7fcf91bcdaea44a5969fd7514fd4fef7013bc36e84e1bc44c15cdc8e4f0f8e4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a
9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633770870792994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde8cbf028726ed80b1bc9b94ee92811ad76b704ee8721f1458f774c86feb5e7,PodSandboxId:9c9534901c6c8c0fae018b27408c5b4b88723ea4135ce110170c2f39ee855056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff
195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633770825814387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016878b0ed0a6d4f8a7c2d3161cb1c8a23e3829a0b468ebd97f4cf25b5f23c75,PodSandboxId:c662ecf27d35bb6ba782c62cb4c8775a4dc39536a8a8055c724d48a5f20840f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633770858986624,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d59fc41d941c2eaf6cdebce8611d06daaafcb9eaafcc0cef5b05b315973518,PodSandboxId:3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562,State:CONTAINER_EXITED,CreatedAt:1744633744246900443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08dc9296e80d47e6714ed00eb35508495621996f34120627ac6753ae4cd62,PodSandboxId:14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_
EXITED,CreatedAt:1744633732548629328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cc318416a2e93e0f486990973f00fbe7eadce25e6d4c5a9df4b5864b134eb,PodSandboxId:47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744633732511353905,L
abels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0bd14547fc66bbf0334984d041096ff9389559194f95a61071dd4f76462fc4,PodSandboxId:2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bc
a912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744633727923887746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c7b28e5bcd8258b42a5bb54da8259e6937eb70a21fec15edbfde289b7d3bfb,PodSandboxId:ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247
abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744633727862506473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e58c4d42bf0be189248882fccc4f92d682942ad3be7b4c1dc9664fa95be5b42,PodSandboxId:a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744633727892806125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ffa9e91a-3ba1-4b98-bf9f-54da6feb3333 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	3cbad5ed611c0       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   090136b8086e5       kubernetes-dashboard-7779f9b69b-chgtk
	ff4589cfa0059       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   a86aff1655956       dashboard-metrics-scraper-5d59dccf9b-z4vd8
	d267022129b97       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              About a minute ago   Exited              mount-munger                0                   bbc7a9734e4a5       busybox-mount
	ef25396a72d04       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   fdca15f19904f       hello-node-fcfd88b6f-m9jsj
	3d35b9603ae6d       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   7ac6321e04f9a       hello-node-connect-58f9cf68d8-klbg6
	747f2e0b58d2f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago        Running             coredns                     3                   9f7b911af306b       coredns-668d6bf9bc-h7t9g
	39d0aa2a13f3b       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                 3 minutes ago        Running             kube-proxy                  3                   4e1ef32351550       kube-proxy-vf98w
	e530384f37b39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         5                   dba57defd9231       storage-provisioner
	708549be1081b       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                 3 minutes ago        Running             kube-apiserver              0                   da2463b9deb98       kube-apiserver-functional-760045
	d355b0cf1e9f8       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                 3 minutes ago        Running             kube-scheduler              3                   7fcf91bcdaea4       kube-scheduler-functional-760045
	016878b0ed0a6       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 3 minutes ago        Running             etcd                        3                   c662ecf27d35b       etcd-functional-760045
	cde8cbf028726       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                 3 minutes ago        Running             kube-controller-manager     3                   9c9534901c6c8       kube-controller-manager-functional-760045
	55d59fc41d941       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago        Exited              storage-provisioner         4                   3b40dcff38bcb       storage-provisioner
	dab08dc9296e8       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                 4 minutes ago        Exited              kube-proxy                  2                   14e3608834252       kube-proxy-vf98w
	b43cc318416a2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago        Exited              coredns                     2                   47de29e77014f       coredns-668d6bf9bc-h7t9g
	0f0bd14547fc6       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                 4 minutes ago        Exited              kube-controller-manager     2                   2b2b899806d88       kube-controller-manager-functional-760045
	2e58c4d42bf0b       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                 4 minutes ago        Exited              kube-scheduler              2                   a702732cd9018       kube-scheduler-functional-760045
	f9c7b28e5bcd8       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 4 minutes ago        Exited              etcd                        2                   ba5296a9c0527       etcd-functional-760045
	
	
	==> coredns [747f2e0b58d2f3d7dfd257100e3b5bfcdf2704fb442ea950363a7b76fa03164d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50058 - 7048 "HINFO IN 1014515586151066443.8180690758486996107. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020501932s
	
	
	==> coredns [b43cc318416a2e93e0f486990973f00fbe7eadce25e6d4c5a9df4b5864b134eb] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47029 - 31991 "HINFO IN 3327528597732309114.3443458371943165367. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021934573s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-760045
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-760045
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9d10b41d083ee2064b1b8c7e16503e13b1847696
	                    minikube.k8s.io/name=functional-760045
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T12_28_01_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 12:27:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-760045
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 12:32:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 12:32:06 +0000   Mon, 14 Apr 2025 12:27:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 12:32:06 +0000   Mon, 14 Apr 2025 12:27:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 12:32:06 +0000   Mon, 14 Apr 2025 12:27:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 12:32:06 +0000   Mon, 14 Apr 2025 12:28:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    functional-760045
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d8f62d93b6f435eb1742e0f1657f644
	  System UUID:                6d8f62d9-3b6f-435e-b174-2e0f1657f644
	  Boot ID:                    63c61889-dd95-4e57-b298-791ea61155d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-klbg6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     hello-node-fcfd88b6f-m9jsj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     mysql-58ccfd96bb-mdpt4                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    99s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 coredns-668d6bf9bc-h7t9g                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m
	  kube-system                 etcd-functional-760045                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m5s
	  kube-system                 kube-apiserver-functional-760045              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-controller-manager-functional-760045     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-vf98w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-functional-760045              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-z4vd8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-chgtk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m31s                  kube-proxy       
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  Starting                 4m59s                  kube-proxy       
	  Normal  Starting                 4m38s                  kube-proxy       
	  Normal  Starting                 5m5s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m5s                   kubelet          Node functional-760045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s                   kubelet          Node functional-760045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s                   kubelet          Node functional-760045 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m4s                   kubelet          Node functional-760045 status is now: NodeReady
	  Normal  RegisteredNode           5m2s                   node-controller  Node functional-760045 event: Registered Node functional-760045 in Controller
	  Normal  RegisteredNode           4m36s                  node-controller  Node functional-760045 event: Registered Node functional-760045 in Controller
	  Normal  Starting                 4m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node functional-760045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node functional-760045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node functional-760045 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node functional-760045 event: Registered Node functional-760045 in Controller
	  Normal  Starting                 3m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m36s (x8 over 3m36s)  kubelet          Node functional-760045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m36s (x8 over 3m36s)  kubelet          Node functional-760045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m36s (x7 over 3m36s)  kubelet          Node functional-760045 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m30s                  node-controller  Node functional-760045 event: Registered Node functional-760045 in Controller
	
	
	==> dmesg <==
	[  +4.080254] kauditd_printk_skb: 213 callbacks suppressed
	[ +18.256468] systemd-fstab-generator[3734]: Ignoring "noauto" option for root device
	[  +0.361031] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.283319] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.741119] kauditd_printk_skb: 8 callbacks suppressed
	[Apr14 12:29] systemd-fstab-generator[4299]: Ignoring "noauto" option for root device
	[ +18.245002] systemd-fstab-generator[5152]: Ignoring "noauto" option for root device
	[  +0.081583] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.061415] systemd-fstab-generator[5164]: Ignoring "noauto" option for root device
	[  +0.176463] systemd-fstab-generator[5178]: Ignoring "noauto" option for root device
	[  +0.150757] systemd-fstab-generator[5190]: Ignoring "noauto" option for root device
	[  +0.294243] systemd-fstab-generator[5218]: Ignoring "noauto" option for root device
	[  +0.798939] systemd-fstab-generator[5341]: Ignoring "noauto" option for root device
	[  +2.340273] systemd-fstab-generator[5704]: Ignoring "noauto" option for root device
	[  +4.321732] kauditd_printk_skb: 210 callbacks suppressed
	[ +11.542173] systemd-fstab-generator[6401]: Ignoring "noauto" option for root device
	[  +0.090041] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.489959] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.142503] kauditd_printk_skb: 21 callbacks suppressed
	[Apr14 12:30] kauditd_printk_skb: 32 callbacks suppressed
	[ +25.342906] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.225639] kauditd_printk_skb: 6 callbacks suppressed
	[Apr14 12:31] kauditd_printk_skb: 32 callbacks suppressed
	[ +12.682867] kauditd_printk_skb: 3 callbacks suppressed
	[ +21.468175] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [016878b0ed0a6d4f8a7c2d3161cb1c8a23e3829a0b468ebd97f4cf25b5f23c75] <==
	{"level":"info","ts":"2025-04-14T12:29:32.178889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became pre-candidate at term 4"}
	{"level":"info","ts":"2025-04-14T12:29:32.178923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 received MsgPreVoteResp from 7a50af7ffd27cbe1 at term 4"}
	{"level":"info","ts":"2025-04-14T12:29:32.178936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became candidate at term 5"}
	{"level":"info","ts":"2025-04-14T12:29:32.178945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 received MsgVoteResp from 7a50af7ffd27cbe1 at term 5"}
	{"level":"info","ts":"2025-04-14T12:29:32.178953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became leader at term 5"}
	{"level":"info","ts":"2025-04-14T12:29:32.178983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7a50af7ffd27cbe1 elected leader 7a50af7ffd27cbe1 at term 5"}
	{"level":"info","ts":"2025-04-14T12:29:32.186114Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7a50af7ffd27cbe1","local-member-attributes":"{Name:functional-760045 ClientURLs:[https://192.168.39.48:2379]}","request-path":"/0/members/7a50af7ffd27cbe1/attributes","cluster-id":"59383b002ca7add2","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T12:29:32.186405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:29:32.186482Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:29:32.187695Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:29:32.187842Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T12:29:32.187876Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T12:29:32.188399Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:29:32.190772Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T12:29:32.189004Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.48:2379"}
	{"level":"info","ts":"2025-04-14T12:30:34.121767Z","caller":"traceutil/trace.go:171","msg":"trace[395416169] transaction","detail":"{read_only:false; response_revision:821; number_of_response:1; }","duration":"222.293458ms","start":"2025-04-14T12:30:33.899457Z","end":"2025-04-14T12:30:34.121751Z","steps":["trace[395416169] 'process raft request'  (duration: 133.830127ms)","trace[395416169] 'compare'  (duration: 88.391535ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T12:31:53.484246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.571986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:31:53.484355Z","caller":"traceutil/trace.go:171","msg":"trace[782028940] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:995; }","duration":"263.719371ms","start":"2025-04-14T12:31:53.220624Z","end":"2025-04-14T12:31:53.484344Z","steps":["trace[782028940] 'range keys from in-memory index tree'  (duration: 263.52016ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:31:53.484364Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.941498ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:31:53.484405Z","caller":"traceutil/trace.go:171","msg":"trace[1997673181] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:995; }","duration":"243.006878ms","start":"2025-04-14T12:31:53.241386Z","end":"2025-04-14T12:31:53.484393Z","steps":["trace[1997673181] 'range keys from in-memory index tree'  (duration: 242.892554ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:31:53.484557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.182307ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-04-14T12:31:53.484565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"357.840646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:31:53.484573Z","caller":"traceutil/trace.go:171","msg":"trace[970696691] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:995; }","duration":"164.218942ms","start":"2025-04-14T12:31:53.320349Z","end":"2025-04-14T12:31:53.484567Z","steps":["trace[970696691] 'range keys from in-memory index tree'  (duration: 164.143609ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:31:53.484581Z","caller":"traceutil/trace.go:171","msg":"trace[532552859] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:995; }","duration":"357.881756ms","start":"2025-04-14T12:31:53.126694Z","end":"2025-04-14T12:31:53.484576Z","steps":["trace[532552859] 'range keys from in-memory index tree'  (duration: 357.752041ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:31:53.484596Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T12:31:53.126679Z","time spent":"357.912218ms","remote":"127.0.0.1:51382","response type":"/etcdserverpb.KV/Range","request count":0,"request size":28,"response count":0,"response size":27,"request content":"key:\"/registry/services/specs\" limit:1 "}
	
	
	==> etcd [f9c7b28e5bcd8258b42a5bb54da8259e6937eb70a21fec15edbfde289b7d3bfb] <==
	{"level":"info","ts":"2025-04-14T12:28:50.108966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-04-14T12:28:50.109052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 received MsgPreVoteResp from 7a50af7ffd27cbe1 at term 3"}
	{"level":"info","ts":"2025-04-14T12:28:50.109086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became candidate at term 4"}
	{"level":"info","ts":"2025-04-14T12:28:50.109110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 received MsgVoteResp from 7a50af7ffd27cbe1 at term 4"}
	{"level":"info","ts":"2025-04-14T12:28:50.109130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became leader at term 4"}
	{"level":"info","ts":"2025-04-14T12:28:50.109149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7a50af7ffd27cbe1 elected leader 7a50af7ffd27cbe1 at term 4"}
	{"level":"info","ts":"2025-04-14T12:28:50.115593Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7a50af7ffd27cbe1","local-member-attributes":"{Name:functional-760045 ClientURLs:[https://192.168.39.48:2379]}","request-path":"/0/members/7a50af7ffd27cbe1/attributes","cluster-id":"59383b002ca7add2","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T12:28:50.115714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:28:50.116184Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:28:50.116641Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:28:50.116745Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:28:50.117373Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T12:28:50.117476Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.48:2379"}
	{"level":"info","ts":"2025-04-14T12:28:50.117564Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T12:28:50.117590Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T12:29:20.039767Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-14T12:29:20.039832Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-760045","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.48:2380"],"advertise-client-urls":["https://192.168.39.48:2379"]}
	{"level":"warn","ts":"2025-04-14T12:29:20.039905Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T12:29:20.039981Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T12:29:20.123243Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.48:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T12:29:20.123302Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.48:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-14T12:29:20.123351Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7a50af7ffd27cbe1","current-leader-member-id":"7a50af7ffd27cbe1"}
	{"level":"info","ts":"2025-04-14T12:29:20.126701Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.48:2380"}
	{"level":"info","ts":"2025-04-14T12:29:20.126896Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.48:2380"}
	{"level":"info","ts":"2025-04-14T12:29:20.126945Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-760045","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.48:2380"],"advertise-client-urls":["https://192.168.39.48:2379"]}
	
	
	==> kernel <==
	 12:33:07 up 5 min,  0 users,  load average: 0.40, 0.39, 0.19
	Linux functional-760045 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [708549be1081b704ef9ac4242734030c4d88bf88d3fd1c7901014d85549b1358] <==
	I0414 12:29:33.507057       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 12:29:33.508698       1 aggregator.go:171] initial CRD sync complete...
	I0414 12:29:33.508802       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 12:29:33.508828       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 12:29:33.508919       1 cache.go:39] Caches are synced for autoregister controller
	I0414 12:29:33.514741       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0414 12:29:33.531379       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0414 12:29:33.533500       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0414 12:29:34.300400       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 12:29:34.407392       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 12:29:35.174004       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 12:29:35.240739       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 12:29:35.294008       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 12:29:35.309806       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 12:29:36.679722       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0414 12:29:36.976444       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 12:29:37.026960       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 12:29:52.550376       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.194.213"}
	I0414 12:29:57.424388       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.129.27"}
	I0414 12:29:57.609912       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.255.180"}
	I0414 12:29:58.346543       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.219.196"}
	I0414 12:30:44.867853       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 12:30:45.225695       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.140.31"}
	I0414 12:30:45.266347       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.93.198"}
	I0414 12:31:27.272670       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.235.50"}
	
	
	==> kube-controller-manager [0f0bd14547fc66bbf0334984d041096ff9389559194f95a61071dd4f76462fc4] <==
	I0414 12:28:54.580446       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0414 12:28:54.580483       1 shared_informer.go:320] Caches are synced for deployment
	I0414 12:28:54.580548       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0414 12:28:54.581222       1 shared_informer.go:320] Caches are synced for stateful set
	I0414 12:28:54.586534       1 shared_informer.go:320] Caches are synced for node
	I0414 12:28:54.586760       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0414 12:28:54.586817       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0414 12:28:54.586839       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0414 12:28:54.586863       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0414 12:28:54.586992       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-760045"
	I0414 12:28:54.587356       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 12:28:54.587748       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0414 12:28:54.591565       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 12:28:54.596339       1 shared_informer.go:320] Caches are synced for service account
	I0414 12:28:54.596371       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0414 12:28:54.596641       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.054µs"
	I0414 12:28:54.597591       1 shared_informer.go:320] Caches are synced for TTL
	I0414 12:28:54.604896       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 12:28:54.606258       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0414 12:28:54.621744       1 shared_informer.go:320] Caches are synced for persistent volume
	I0414 12:28:54.630100       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 12:28:54.630187       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 12:28:54.630245       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 12:28:58.367923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.577461ms"
	I0414 12:28:58.368111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="110.823µs"
	
	
	==> kube-controller-manager [cde8cbf028726ed80b1bc9b94ee92811ad76b704ee8721f1458f774c86feb5e7] <==
	I0414 12:30:45.044888       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="15.343671ms"
	E0414 12:30:45.044898       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0414 12:30:45.052198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="5.840682ms"
	E0414 12:30:45.052240       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0414 12:30:45.052984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="6.918509ms"
	E0414 12:30:45.053178       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0414 12:30:45.095070       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="28.470983ms"
	I0414 12:30:45.107077       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="40.440959ms"
	I0414 12:30:45.129168       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="21.992826ms"
	I0414 12:30:45.129339       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="93.516µs"
	I0414 12:30:45.134997       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="39.842185ms"
	I0414 12:30:45.192531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="57.438781ms"
	I0414 12:30:45.192691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="62.031µs"
	I0414 12:30:45.192767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="67.109µs"
	I0414 12:31:05.000294       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-760045"
	I0414 12:31:27.358376       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="39.061865ms"
	I0414 12:31:27.368049       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="9.511341ms"
	I0414 12:31:27.368197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="94.683µs"
	I0414 12:31:27.378460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="73.367µs"
	I0414 12:31:35.349236       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-760045"
	I0414 12:31:49.219415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="14.716497ms"
	I0414 12:31:49.219499       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="40.371µs"
	I0414 12:31:55.267495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="18.593578ms"
	I0414 12:31:55.267821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="63.259µs"
	I0414 12:32:06.259174       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-760045"
	
	
	==> kube-proxy [39d0aa2a13f3b9e963a5525a98316fdb450bdedd36f34b8807639fc299fd728e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 12:29:35.076835       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 12:29:35.086849       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	E0414 12:29:35.086997       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 12:29:35.158825       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 12:29:35.158855       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 12:29:35.158878       1 server_linux.go:170] "Using iptables Proxier"
	I0414 12:29:35.162459       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 12:29:35.162762       1 server.go:497] "Version info" version="v1.32.2"
	I0414 12:29:35.162991       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:29:35.164275       1 config.go:199] "Starting service config controller"
	I0414 12:29:35.164379       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 12:29:35.164446       1 config.go:105] "Starting endpoint slice config controller"
	I0414 12:29:35.164463       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 12:29:35.164994       1 config.go:329] "Starting node config controller"
	I0414 12:29:35.165102       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 12:29:35.264544       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 12:29:35.264602       1 shared_informer.go:320] Caches are synced for service config
	I0414 12:29:35.265929       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [dab08dc9296e80d47e6714ed00eb35508495621996f34120627ac6753ae4cd62] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 12:28:52.870736       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 12:28:52.895096       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	E0414 12:28:52.895154       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 12:28:52.982547       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 12:28:52.983079       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 12:28:52.983308       1 server_linux.go:170] "Using iptables Proxier"
	I0414 12:28:52.989875       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 12:28:52.990170       1 server.go:497] "Version info" version="v1.32.2"
	I0414 12:28:52.990196       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:28:52.991741       1 config.go:199] "Starting service config controller"
	I0414 12:28:52.991790       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 12:28:52.991832       1 config.go:105] "Starting endpoint slice config controller"
	I0414 12:28:52.991836       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 12:28:52.992292       1 config.go:329] "Starting node config controller"
	I0414 12:28:52.992318       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 12:28:53.092110       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 12:28:53.092156       1 shared_informer.go:320] Caches are synced for service config
	I0414 12:28:53.092430       1 shared_informer.go:320] Caches are synced for node config
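Both kube-proxy instances above report the same nftables cleanup failure before settling on the iptables Proxier: the guest kernel rejects the table-creation statements kube-proxy feeds to nft on /dev/stdin. A minimal sketch of replaying that probe by hand, assuming nft is installed inside the minikube guest (the statements and table names are taken verbatim from the error above):
	# sketch only: replay the two statements kube-proxy pipes to nft via /dev/stdin
	out/minikube-linux-amd64 -p functional-760045 ssh "echo 'add table ip kube-proxy' | sudo nft -f /dev/stdin"
	out/minikube-linux-amd64 -p functional-760045 ssh "echo 'add table ip6 kube-proxy' | sudo nft -f /dev/stdin"
	# on this kernel both return "Operation not supported", which is why kube-proxy
	# falls back to "Using iptables Proxier" in the lines that follow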
	
	
	==> kube-scheduler [2e58c4d42bf0be189248882fccc4f92d682942ad3be7b4c1dc9664fa95be5b42] <==
	I0414 12:28:48.747183       1 serving.go:386] Generated self-signed cert in-memory
	W0414 12:28:51.299622       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 12:28:51.299751       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 12:28:51.299778       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 12:28:51.299797       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 12:28:51.376394       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 12:28:51.376906       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:28:51.380553       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 12:28:51.380749       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 12:28:51.380784       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 12:28:51.385605       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 12:28:51.481305       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0414 12:29:20.039536       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d355b0cf1e9f8799ffb38b5c24435802ebadef16716cb1d1a24263db78bef5ae] <==
	I0414 12:29:31.931709       1 serving.go:386] Generated self-signed cert in-memory
	W0414 12:29:33.367647       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 12:29:33.367764       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 12:29:33.367791       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 12:29:33.367813       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 12:29:33.422199       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 12:29:33.423643       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:29:33.444389       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 12:29:33.445083       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 12:29:33.445125       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 12:29:33.449399       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 12:29:33.549960       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 12:32:20 functional-760045 kubelet[5711]: E0414 12:32:20.510463    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633940506853737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:199363,},InodesUsed:&UInt64Value{Value:96,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:32:25 functional-760045 kubelet[5711]: E0414 12:32:25.802508    5711 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Apr 14 12:32:25 functional-760045 kubelet[5711]: E0414 12:32:25.802584    5711 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Apr 14 12:32:25 functional-760045 kubelet[5711]: E0414 12:32:25.802811    5711 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6bwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(043b9cf5-ee9c-4156-8778-2420e4c78ede): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 14 12:32:25 functional-760045 kubelet[5711]: E0414 12:32:25.805559    5711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="043b9cf5-ee9c-4156-8778-2420e4c78ede"
	Apr 14 12:32:30 functional-760045 kubelet[5711]: E0414 12:32:30.413485    5711 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 14 12:32:30 functional-760045 kubelet[5711]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 14 12:32:30 functional-760045 kubelet[5711]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 14 12:32:30 functional-760045 kubelet[5711]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 12:32:30 functional-760045 kubelet[5711]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 12:32:30 functional-760045 kubelet[5711]: E0414 12:32:30.459552    5711 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod3d520f2d-d301-4af3-ad63-624916ce0305/crio-47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792: Error finding container 47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792: Status 404 returned error can't find the container with id 47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792
	Apr 14 12:32:30 functional-760045 kubelet[5711]: E0414 12:32:30.459857    5711 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9/crio-14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e: Error finding container 14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e: Status 404 returned error can't find the container with id 14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e
	Apr 14 12:32:30 functional-760045 kubelet[5711]: E0414 12:32:30.460123    5711 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7adf4ebe949d159f6be06adfa228b5a9/crio-a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77: Error finding container a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77: Status 404 returned error can't find the container with id a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77
	Apr 14 12:32:30 functional-760045 kubelet[5711]: E0414 12:32:30.460520    5711 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podb300b501-5ebd-47df-a358-754ea70df398/crio-3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f: Error finding container 3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f: Status 404 returned error can't find the container with id 3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f
	Apr 14 12:32:30 functional-760045 kubelet[5711]: E0414 12:32:30.460732    5711 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod71e49f19f008c8999723c33183b5ea26/crio-2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8: Error finding container 2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8: Status 404 returned error can't find the container with id 2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8
	Apr 14 12:32:30 functional-760045 kubelet[5711]: E0414 12:32:30.460831    5711 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1afd23a171634d4b565b0afc9c52f067/crio-ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af: Error finding container ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af: Status 404 returned error can't find the container with id ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af
	Apr 14 12:32:30 functional-760045 kubelet[5711]: E0414 12:32:30.511934    5711 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633950511627043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:199363,},InodesUsed:&UInt64Value{Value:96,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:32:30 functional-760045 kubelet[5711]: E0414 12:32:30.512162    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633950511627043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:199363,},InodesUsed:&UInt64Value{Value:96,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:32:40 functional-760045 kubelet[5711]: E0414 12:32:40.393802    5711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="043b9cf5-ee9c-4156-8778-2420e4c78ede"
	Apr 14 12:32:40 functional-760045 kubelet[5711]: E0414 12:32:40.513789    5711 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633960513547512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:32:40 functional-760045 kubelet[5711]: E0414 12:32:40.513824    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633960513547512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:32:50 functional-760045 kubelet[5711]: E0414 12:32:50.515383    5711 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633970515139933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:32:50 functional-760045 kubelet[5711]: E0414 12:32:50.515422    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633970515139933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:33:00 functional-760045 kubelet[5711]: E0414 12:33:00.525286    5711 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633980522752312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:33:00 functional-760045 kubelet[5711]: E0414 12:33:00.525331    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744633980522752312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [3cbad5ed611c0eba69f86bcf0ab8537b10e2e840003883ff68795ab32e27c7e8] <==
	2025/04/14 12:31:54 Using namespace: kubernetes-dashboard
	2025/04/14 12:31:54 Using in-cluster config to connect to apiserver
	2025/04/14 12:31:54 Using secret token for csrf signing
	2025/04/14 12:31:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/14 12:31:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/14 12:31:54 Successful initial request to the apiserver, version: v1.32.2
	2025/04/14 12:31:54 Generating JWE encryption key
	2025/04/14 12:31:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/14 12:31:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/14 12:31:54 Initializing JWE encryption key from synchronized object
	2025/04/14 12:31:54 Creating in-cluster Sidecar client
	2025/04/14 12:31:54 Successful request to sidecar
	2025/04/14 12:31:54 Serving insecurely on HTTP port: 9090
	2025/04/14 12:31:54 Starting overwatch
	
	
	==> storage-provisioner [55d59fc41d941c2eaf6cdebce8611d06daaafcb9eaafcc0cef5b05b315973518] <==
	I0414 12:29:04.337505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 12:29:04.348641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 12:29:04.348702       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [e530384f37b390f123f9938b79fbf3be5157db5a590a6544ca36af743bc39b1c] <==
	I0414 12:29:34.873609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 12:29:34.882296       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 12:29:34.882911       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 12:29:52.289312       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0f6f6fc-646a-4e27-8468-c31b7be52935", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-760045_f6802bd4-0f4f-4c68-ae8e-596452fbf97f became leader
	I0414 12:29:52.290111       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 12:29:52.290297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-760045_f6802bd4-0f4f-4c68-ae8e-596452fbf97f!
	I0414 12:29:52.391211       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-760045_f6802bd4-0f4f-4c68-ae8e-596452fbf97f!
	I0414 12:30:03.367335       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0414 12:30:03.367474       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    fb3662dc-75cd-4ede-be17-9ab3d7836fef 341 0 2025-04-14 12:28:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-04-14 12:28:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-2014da10-195c-4c51-80df-c3118a12974a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  2014da10-195c-4c51-80df-c3118a12974a 762 0 2025-04-14 12:30:03 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-04-14 12:30:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-04-14 12:30:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0414 12:30:03.368120       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-2014da10-195c-4c51-80df-c3118a12974a" provisioned
	I0414 12:30:03.368173       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0414 12:30:03.368191       1 volume_store.go:212] Trying to save persistentvolume "pvc-2014da10-195c-4c51-80df-c3118a12974a"
	I0414 12:30:03.369254       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2014da10-195c-4c51-80df-c3118a12974a", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0414 12:30:03.446777       1 volume_store.go:219] persistentvolume "pvc-2014da10-195c-4c51-80df-c3118a12974a" saved
	I0414 12:30:03.453308       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2014da10-195c-4c51-80df-c3118a12974a", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-2014da10-195c-4c51-80df-c3118a12974a
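For reference, the PersistentVolumeClaim object the provisioner acted on (default/myclaim: ReadWriteOnce, 500Mi, the default "standard" StorageClass) is dumped in full above. A minimal manifest reconstructed from that log entry would look roughly as follows; this is a readability sketch taken from the logged object, not a file used by the test:
	# sketch: re-create the default/myclaim PVC recorded in the provisioner log above
	# (<<- strips the leading tabs used here for layout, leaving space-indented YAML)
	kubectl --context functional-760045 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  storageClassName: standard
	EOF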
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-760045 -n functional-760045
helpers_test.go:261: (dbg) Run:  kubectl --context functional-760045 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-mdpt4 nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-760045 describe pod busybox-mount mysql-58ccfd96bb-mdpt4 nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-760045 describe pod busybox-mount mysql-58ccfd96bb-mdpt4 nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-760045/192.168.39.48
	Start Time:       Mon, 14 Apr 2025 12:30:43 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d267022129b9774026c65c59fbef8a8695d761396266debf5a4d3705b0c40ac5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 14 Apr 2025 12:31:14 +0000
	      Finished:     Mon, 14 Apr 2025 12:31:14 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmts7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-mmts7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m25s  default-scheduler  Successfully assigned default/busybox-mount to functional-760045
	  Normal  Pulling    2m24s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     114s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.342s (30.125s including waiting). Image size: 4631262 bytes.
	  Normal  Created    114s   kubelet            Created container: mount-munger
	  Normal  Started    114s   kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-mdpt4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-760045/192.168.39.48
	Start Time:       Mon, 14 Apr 2025 12:31:27 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d5h96 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-d5h96:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  101s  default-scheduler  Successfully assigned default/mysql-58ccfd96bb-mdpt4 to functional-760045
	  Normal  Pulling    101s  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-760045/192.168.39.48
	Start Time:       Mon, 14 Apr 2025 12:29:57 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bdvxx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bdvxx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m11s                default-scheduler  Successfully assigned default/nginx-svc to functional-760045
	  Warning  Failed     2m38s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     83s (x2 over 2m38s)  kubelet            Error: ErrImagePull
	  Warning  Failed     83s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    67s (x2 over 2m38s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     67s (x2 over 2m38s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    54s (x3 over 3m11s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-760045/192.168.39.48
	Start Time:       Mon, 14 Apr 2025 12:30:05 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6bwl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-g6bwl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m3s                default-scheduler  Successfully assigned default/sp-pod to functional-760045
	  Warning  Failed     117s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     43s (x2 over 117s)  kubelet            Error: ErrImagePull
	  Warning  Failed     43s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    28s (x2 over 116s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     28s (x2 over 116s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    17s (x3 over 3m3s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (190.94s)
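The PersistentVolumeClaim failure above and the MySQL failure that follows both trace back to the same Docker Hub response in their pod events: toomanyrequests on unauthenticated pulls of docker.io/nginx and docker.io/mysql:5.7. One possible workaround, sketched here and not part of the recorded run, is to pull the images once on the host and side-load them into the profile with minikube's image load subcommand (the same subcommand exercised elsewhere in this report); this only helps pods whose imagePullPolicy lets the kubelet use a cached image:
	# workaround sketch: fetch once on the host, then push into the functional-760045 profile
	docker pull docker.io/nginx:alpine
	docker pull docker.io/mysql:5.7
	out/minikube-linux-amd64 -p functional-760045 image load --daemon docker.io/nginx:alpine
	out/minikube-linux-amd64 -p functional-760045 image load --daemon docker.io/mysql:5.7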

                                                
                                    
TestFunctional/parallel/MySQL (603.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-760045 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-mdpt4" [7c8c4c0e-fc92-4dde-933d-2daf5e4c8526] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0414 12:31:49.052854 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:32:16.765547 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
2025/04/14 12:32:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-760045 -n functional-760045
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-04-14 12:41:27.623218769 +0000 UTC m=+1361.556425936
functional_test.go:1816: (dbg) Run:  kubectl --context functional-760045 describe po mysql-58ccfd96bb-mdpt4 -n default
functional_test.go:1816: (dbg) kubectl --context functional-760045 describe po mysql-58ccfd96bb-mdpt4 -n default:
Name:             mysql-58ccfd96bb-mdpt4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-760045/192.168.39.48
Start Time:       Mon, 14 Apr 2025 12:31:27 +0000
Labels:           app=mysql
pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.14
IPs:
IP:           10.244.0.14
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d5h96 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-d5h96:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-58ccfd96bb-mdpt4 to functional-760045
Warning  Failed     2m (x4 over 8m16s)    kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m (x4 over 8m16s)    kubelet            Error: ErrImagePull
Normal   BackOff    43s (x11 over 8m15s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     43s (x11 over 8m15s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    28s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1816: (dbg) Run:  kubectl --context functional-760045 logs mysql-58ccfd96bb-mdpt4 -n default
functional_test.go:1816: (dbg) Non-zero exit: kubectl --context functional-760045 logs mysql-58ccfd96bb-mdpt4 -n default: exit status 1 (80.831025ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-mdpt4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1816: kubectl --context functional-760045 logs mysql-58ccfd96bb-mdpt4 -n default: exit status 1
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-760045 -n functional-760045
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-760045 logs -n 25: (1.663174266s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-760045 ssh sudo cat                                          | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /etc/ssl/certs/11757462.pem                                             |                   |         |         |                     |                     |
	| ssh            | functional-760045 ssh sudo cat                                          | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /usr/share/ca-certificates/11757462.pem                                 |                   |         |         |                     |                     |
	| ssh            | functional-760045 ssh sudo cat                                          | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                               |                   |         |         |                     |                     |
	| license        |                                                                         | minikube          | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	| ssh            | functional-760045 ssh sudo                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC |                     |
	|                | systemctl is-active docker                                              |                   |         |         |                     |                     |
	| ssh            | functional-760045 ssh sudo                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC |                     |
	|                | systemctl is-active containerd                                          |                   |         |         |                     |                     |
	| image          | functional-760045 image load --daemon                                   | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | kicbase/echo-server:functional-760045                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045 image ls                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	| image          | functional-760045 image load --daemon                                   | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | kicbase/echo-server:functional-760045                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045 image ls                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	| image          | functional-760045 image save kicbase/echo-server:functional-760045      | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045 image rm                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | kicbase/echo-server:functional-760045                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045 image ls                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	| image          | functional-760045 image load                                            | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-760045 ssh sudo cat                                          | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:31 UTC | 14 Apr 25 12:31 UTC |
	|                | /etc/test/nested/copy/1175746/hosts                                     |                   |         |         |                     |                     |
	| image          | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-760045 ssh pgrep                                             | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-760045 image build -t                                        | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | localhost/my-image:functional-760045                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-760045 image ls                                              | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	| image          | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| update-context | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-760045                                                       | functional-760045 | jenkins | v1.35.0 | 14 Apr 25 12:32 UTC | 14 Apr 25 12:32 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:30:43
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:30:43.547838 1183689 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:30:43.548264 1183689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:30:43.548284 1183689 out.go:358] Setting ErrFile to fd 2...
	I0414 12:30:43.548293 1183689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:30:43.548634 1183689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 12:30:43.549280 1183689 out.go:352] Setting JSON to false
	I0414 12:30:43.550584 1183689 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15191,"bootTime":1744618653,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:30:43.550714 1183689 start.go:139] virtualization: kvm guest
	I0414 12:30:43.553120 1183689 out.go:177] * [functional-760045] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:30:43.555095 1183689 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 12:30:43.555108 1183689 notify.go:220] Checking for updates...
	I0414 12:30:43.558295 1183689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:30:43.559959 1183689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 12:30:43.561746 1183689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 12:30:43.563374 1183689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:30:43.564876 1183689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:30:43.566745 1183689 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:30:43.567233 1183689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:30:43.567360 1183689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:30:43.586143 1183689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0414 12:30:43.586654 1183689 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:30:43.587291 1183689 main.go:141] libmachine: Using API Version  1
	I0414 12:30:43.587310 1183689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:30:43.587764 1183689 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:30:43.588013 1183689 main.go:141] libmachine: (functional-760045) Calling .DriverName
	I0414 12:30:43.588336 1183689 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:30:43.588656 1183689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:30:43.588704 1183689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:30:43.605224 1183689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I0414 12:30:43.605913 1183689 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:30:43.606553 1183689 main.go:141] libmachine: Using API Version  1
	I0414 12:30:43.606573 1183689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:30:43.607074 1183689 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:30:43.607344 1183689 main.go:141] libmachine: (functional-760045) Calling .DriverName
	I0414 12:30:43.651464 1183689 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 12:30:43.653085 1183689 start.go:297] selected driver: kvm2
	I0414 12:30:43.653112 1183689 start.go:901] validating driver "kvm2" against &{Name:functional-760045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-760045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:30:43.653253 1183689 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:30:43.654362 1183689 cni.go:84] Creating CNI manager for ""
	I0414 12:30:43.654419 1183689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:30:43.654472 1183689 start.go:340] cluster config:
	{Name:functional-760045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-760045 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:30:43.656713 1183689 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.542474709Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634488542445301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e9da5eb-6e4e-49ba-bbf9-ee9f8ee36993 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.543180688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4515fa6-df3a-4a6a-b6fa-cc6ff6eec3df name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.543329478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4515fa6-df3a-4a6a-b6fa-cc6ff6eec3df name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.543826181Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3cbad5ed611c0eba69f86bcf0ab8537b10e2e840003883ff68795ab32e27c7e8,PodSandboxId:090136b8086e52d18ddfd121f4aad5a8f34828376447c06ae252035542f4a57c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744633914404854095,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-chgtk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 99f2e74c-7f7a-4849-a298-3eccf7ce50ba,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4589cfa0059b8034d234e2f1aab9c1a6fa015423a4a6a89e481feb427bb248,PodSandboxId:a86aff1655956d9b0c55d74f060ba4d26d8dc2ddd9164adbb1e2b6f2c878539a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744633908724732398,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-z4vd8,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4cd0930a-ab95-47c9-b854-60d4d02c1899,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d267022129b9774026c65c59fbef8a8695d761396266debf5a4d3705b0c40ac5,PodSandboxId:bbc7a9734e4a5f79f6e3f4672572c3e1066f6e008a76b85a14d327ff59cc9944,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744633874560295396,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb6157d-76c5-48e2-a0f6-eb478bf84611,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25396a72d045eade6f158fec8ee8e450282199e20553060df500f6d07301a5,PodSandboxId:fdca15f19904fb16e8641ed654773e98dcb64355ed9838291b4ecaf03a751fbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833330170913,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-m9jsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78c2bfff-6f14-4194-b794-ee10b06219da,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d35b9603ae6ddaa2fd9425a00d44e4f06eecd3fca6a494734797b33df862a7a,PodSandboxId:7ac6321e04f9addd7e44ff6b2a6aa5a9a273619c2b8624c85326335a3d470e8c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833239696721,Labels:map[string]string{
io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-klbg6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3411c5b6-bfba-4c65-b22a-1375ad82b576,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f2e0b58d2f3d7dfd257100e3b5bfcdf2704fb442ea950363a7b76fa03164d,PodSandboxId:9f7b911af306be6bfe1580264196b8fd71cbd3c174f28aaf7520338e85ea3ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633775222416762,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d0aa2a13f3b9e963a5525a98316fdb450bdedd36f34b8807639fc299fd728e,PodSandboxId:4e1ef32351550387eb9b52365521b118d80faf9603d5cce495f2f9d6b17427a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633774670495721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e530384f37b390f123f9938b79fbf3be5157db5a590a6544ca36af743bc39b1c,PodSandboxId:dba57defd9231bb47ece00be2daed649a29c184bad1e1b79deae1065e817d43a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633774663612816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708549be1081b704ef9ac4242734030c4d88bf88d3fd1c7901014d85549b1358,PodSandboxId:da2463b9deb985b8d3a88c510b0bac83de1d809335f15dddc4058ba6e0f17012,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a1747
38baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633770973864151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2591a1078d77eeb97d04aada88e7b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d355b0cf1e9f8799ffb38b5c24435802ebadef16716cb1d1a24263db78bef5ae,PodSandboxId:7fcf91bcdaea44a5969fd7514fd4fef7013bc36e84e1bc44c15cdc8e4f0f8e4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a
9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633770870792994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde8cbf028726ed80b1bc9b94ee92811ad76b704ee8721f1458f774c86feb5e7,PodSandboxId:9c9534901c6c8c0fae018b27408c5b4b88723ea4135ce110170c2f39ee855056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff
195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633770825814387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016878b0ed0a6d4f8a7c2d3161cb1c8a23e3829a0b468ebd97f4cf25b5f23c75,PodSandboxId:c662ecf27d35bb6ba782c62cb4c8775a4dc39536a8a8055c724d48a5f20840f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633770858986624,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d59fc41d941c2eaf6cdebce8611d06daaafcb9eaafcc0cef5b05b315973518,PodSandboxId:3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562,State:CONTAINER_EXITED,CreatedAt:1744633744246900443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08dc9296e80d47e6714ed00eb35508495621996f34120627ac6753ae4cd62,PodSandboxId:14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_
EXITED,CreatedAt:1744633732548629328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cc318416a2e93e0f486990973f00fbe7eadce25e6d4c5a9df4b5864b134eb,PodSandboxId:47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744633732511353905,L
abels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0bd14547fc66bbf0334984d041096ff9389559194f95a61071dd4f76462fc4,PodSandboxId:2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bc
a912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744633727923887746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c7b28e5bcd8258b42a5bb54da8259e6937eb70a21fec15edbfde289b7d3bfb,PodSandboxId:ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247
abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744633727862506473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e58c4d42bf0be189248882fccc4f92d682942ad3be7b4c1dc9664fa95be5b42,PodSandboxId:a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744633727892806125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4515fa6-df3a-4a6a-b6fa-cc6ff6eec3df name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.593721330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05ef0b69-2e7f-43a9-8e37-cb07a61e43b1 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.593798672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05ef0b69-2e7f-43a9-8e37-cb07a61e43b1 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.596304773Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5be4f002-ed76-421c-ac04-ce431a9eda45 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.596977201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634488596952606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5be4f002-ed76-421c-ac04-ce431a9eda45 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.597992002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbec6d65-7aa2-4382-8996-0478865a4296 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.598122297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbec6d65-7aa2-4382-8996-0478865a4296 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.598461758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3cbad5ed611c0eba69f86bcf0ab8537b10e2e840003883ff68795ab32e27c7e8,PodSandboxId:090136b8086e52d18ddfd121f4aad5a8f34828376447c06ae252035542f4a57c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744633914404854095,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-chgtk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 99f2e74c-7f7a-4849-a298-3eccf7ce50ba,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4589cfa0059b8034d234e2f1aab9c1a6fa015423a4a6a89e481feb427bb248,PodSandboxId:a86aff1655956d9b0c55d74f060ba4d26d8dc2ddd9164adbb1e2b6f2c878539a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744633908724732398,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-z4vd8,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4cd0930a-ab95-47c9-b854-60d4d02c1899,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d267022129b9774026c65c59fbef8a8695d761396266debf5a4d3705b0c40ac5,PodSandboxId:bbc7a9734e4a5f79f6e3f4672572c3e1066f6e008a76b85a14d327ff59cc9944,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744633874560295396,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb6157d-76c5-48e2-a0f6-eb478bf84611,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25396a72d045eade6f158fec8ee8e450282199e20553060df500f6d07301a5,PodSandboxId:fdca15f19904fb16e8641ed654773e98dcb64355ed9838291b4ecaf03a751fbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833330170913,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-m9jsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78c2bfff-6f14-4194-b794-ee10b06219da,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d35b9603ae6ddaa2fd9425a00d44e4f06eecd3fca6a494734797b33df862a7a,PodSandboxId:7ac6321e04f9addd7e44ff6b2a6aa5a9a273619c2b8624c85326335a3d470e8c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833239696721,Labels:map[string]string{
io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-klbg6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3411c5b6-bfba-4c65-b22a-1375ad82b576,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f2e0b58d2f3d7dfd257100e3b5bfcdf2704fb442ea950363a7b76fa03164d,PodSandboxId:9f7b911af306be6bfe1580264196b8fd71cbd3c174f28aaf7520338e85ea3ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633775222416762,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d0aa2a13f3b9e963a5525a98316fdb450bdedd36f34b8807639fc299fd728e,PodSandboxId:4e1ef32351550387eb9b52365521b118d80faf9603d5cce495f2f9d6b17427a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633774670495721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e530384f37b390f123f9938b79fbf3be5157db5a590a6544ca36af743bc39b1c,PodSandboxId:dba57defd9231bb47ece00be2daed649a29c184bad1e1b79deae1065e817d43a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633774663612816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708549be1081b704ef9ac4242734030c4d88bf88d3fd1c7901014d85549b1358,PodSandboxId:da2463b9deb985b8d3a88c510b0bac83de1d809335f15dddc4058ba6e0f17012,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a1747
38baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633770973864151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2591a1078d77eeb97d04aada88e7b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d355b0cf1e9f8799ffb38b5c24435802ebadef16716cb1d1a24263db78bef5ae,PodSandboxId:7fcf91bcdaea44a5969fd7514fd4fef7013bc36e84e1bc44c15cdc8e4f0f8e4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a
9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633770870792994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde8cbf028726ed80b1bc9b94ee92811ad76b704ee8721f1458f774c86feb5e7,PodSandboxId:9c9534901c6c8c0fae018b27408c5b4b88723ea4135ce110170c2f39ee855056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff
195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633770825814387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016878b0ed0a6d4f8a7c2d3161cb1c8a23e3829a0b468ebd97f4cf25b5f23c75,PodSandboxId:c662ecf27d35bb6ba782c62cb4c8775a4dc39536a8a8055c724d48a5f20840f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633770858986624,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d59fc41d941c2eaf6cdebce8611d06daaafcb9eaafcc0cef5b05b315973518,PodSandboxId:3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562,State:CONTAINER_EXITED,CreatedAt:1744633744246900443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08dc9296e80d47e6714ed00eb35508495621996f34120627ac6753ae4cd62,PodSandboxId:14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_
EXITED,CreatedAt:1744633732548629328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cc318416a2e93e0f486990973f00fbe7eadce25e6d4c5a9df4b5864b134eb,PodSandboxId:47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744633732511353905,L
abels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0bd14547fc66bbf0334984d041096ff9389559194f95a61071dd4f76462fc4,PodSandboxId:2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bc
a912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744633727923887746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c7b28e5bcd8258b42a5bb54da8259e6937eb70a21fec15edbfde289b7d3bfb,PodSandboxId:ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247
abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744633727862506473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e58c4d42bf0be189248882fccc4f92d682942ad3be7b4c1dc9664fa95be5b42,PodSandboxId:a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744633727892806125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbec6d65-7aa2-4382-8996-0478865a4296 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.642218540Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f2c9d96-4ead-47e2-b920-7b6d156198ea name=/runtime.v1.RuntimeService/Version
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.642395319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f2c9d96-4ead-47e2-b920-7b6d156198ea name=/runtime.v1.RuntimeService/Version
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.644332774Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48df49c0-2da1-4d9d-b7b6-ddd368dbba28 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.645104349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634488645008755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48df49c0-2da1-4d9d-b7b6-ddd368dbba28 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.645869073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c1b2e7b-e033-493b-b633-b6418337d30d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.645949429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c1b2e7b-e033-493b-b633-b6418337d30d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.646340290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3cbad5ed611c0eba69f86bcf0ab8537b10e2e840003883ff68795ab32e27c7e8,PodSandboxId:090136b8086e52d18ddfd121f4aad5a8f34828376447c06ae252035542f4a57c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744633914404854095,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-chgtk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 99f2e74c-7f7a-4849-a298-3eccf7ce50ba,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4589cfa0059b8034d234e2f1aab9c1a6fa015423a4a6a89e481feb427bb248,PodSandboxId:a86aff1655956d9b0c55d74f060ba4d26d8dc2ddd9164adbb1e2b6f2c878539a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744633908724732398,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-z4vd8,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4cd0930a-ab95-47c9-b854-60d4d02c1899,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d267022129b9774026c65c59fbef8a8695d761396266debf5a4d3705b0c40ac5,PodSandboxId:bbc7a9734e4a5f79f6e3f4672572c3e1066f6e008a76b85a14d327ff59cc9944,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744633874560295396,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb6157d-76c5-48e2-a0f6-eb478bf84611,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25396a72d045eade6f158fec8ee8e450282199e20553060df500f6d07301a5,PodSandboxId:fdca15f19904fb16e8641ed654773e98dcb64355ed9838291b4ecaf03a751fbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833330170913,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-m9jsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78c2bfff-6f14-4194-b794-ee10b06219da,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d35b9603ae6ddaa2fd9425a00d44e4f06eecd3fca6a494734797b33df862a7a,PodSandboxId:7ac6321e04f9addd7e44ff6b2a6aa5a9a273619c2b8624c85326335a3d470e8c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833239696721,Labels:map[string]string{
io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-klbg6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3411c5b6-bfba-4c65-b22a-1375ad82b576,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f2e0b58d2f3d7dfd257100e3b5bfcdf2704fb442ea950363a7b76fa03164d,PodSandboxId:9f7b911af306be6bfe1580264196b8fd71cbd3c174f28aaf7520338e85ea3ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633775222416762,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d0aa2a13f3b9e963a5525a98316fdb450bdedd36f34b8807639fc299fd728e,PodSandboxId:4e1ef32351550387eb9b52365521b118d80faf9603d5cce495f2f9d6b17427a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633774670495721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e530384f37b390f123f9938b79fbf3be5157db5a590a6544ca36af743bc39b1c,PodSandboxId:dba57defd9231bb47ece00be2daed649a29c184bad1e1b79deae1065e817d43a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633774663612816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708549be1081b704ef9ac4242734030c4d88bf88d3fd1c7901014d85549b1358,PodSandboxId:da2463b9deb985b8d3a88c510b0bac83de1d809335f15dddc4058ba6e0f17012,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a1747
38baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633770973864151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2591a1078d77eeb97d04aada88e7b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d355b0cf1e9f8799ffb38b5c24435802ebadef16716cb1d1a24263db78bef5ae,PodSandboxId:7fcf91bcdaea44a5969fd7514fd4fef7013bc36e84e1bc44c15cdc8e4f0f8e4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a
9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633770870792994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde8cbf028726ed80b1bc9b94ee92811ad76b704ee8721f1458f774c86feb5e7,PodSandboxId:9c9534901c6c8c0fae018b27408c5b4b88723ea4135ce110170c2f39ee855056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff
195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633770825814387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016878b0ed0a6d4f8a7c2d3161cb1c8a23e3829a0b468ebd97f4cf25b5f23c75,PodSandboxId:c662ecf27d35bb6ba782c62cb4c8775a4dc39536a8a8055c724d48a5f20840f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633770858986624,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d59fc41d941c2eaf6cdebce8611d06daaafcb9eaafcc0cef5b05b315973518,PodSandboxId:3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562,State:CONTAINER_EXITED,CreatedAt:1744633744246900443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08dc9296e80d47e6714ed00eb35508495621996f34120627ac6753ae4cd62,PodSandboxId:14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_
EXITED,CreatedAt:1744633732548629328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cc318416a2e93e0f486990973f00fbe7eadce25e6d4c5a9df4b5864b134eb,PodSandboxId:47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744633732511353905,L
abels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0bd14547fc66bbf0334984d041096ff9389559194f95a61071dd4f76462fc4,PodSandboxId:2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bc
a912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744633727923887746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c7b28e5bcd8258b42a5bb54da8259e6937eb70a21fec15edbfde289b7d3bfb,PodSandboxId:ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247
abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744633727862506473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e58c4d42bf0be189248882fccc4f92d682942ad3be7b4c1dc9664fa95be5b42,PodSandboxId:a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744633727892806125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c1b2e7b-e033-493b-b633-b6418337d30d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.686467310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca7e6792-f08d-43c6-8f4e-c2b5eaa76366 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.686544175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca7e6792-f08d-43c6-8f4e-c2b5eaa76366 name=/runtime.v1.RuntimeService/Version
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.687960952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3615eb0a-e6db-4df5-92c1-7fe936b4672e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.688672748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634488688645551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3615eb0a-e6db-4df5-92c1-7fe936b4672e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.689482653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec76ad75-2857-4a4e-b1c1-1be5c48ff33b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.689543932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec76ad75-2857-4a4e-b1c1-1be5c48ff33b name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 12:41:28 functional-760045 crio[5227]: time="2025-04-14 12:41:28.689892902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3cbad5ed611c0eba69f86bcf0ab8537b10e2e840003883ff68795ab32e27c7e8,PodSandboxId:090136b8086e52d18ddfd121f4aad5a8f34828376447c06ae252035542f4a57c,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744633914404854095,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-chgtk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 99f2e74c-7f7a-4849-a298-3eccf7ce50ba,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4589cfa0059b8034d234e2f1aab9c1a6fa015423a4a6a89e481feb427bb248,PodSandboxId:a86aff1655956d9b0c55d74f060ba4d26d8dc2ddd9164adbb1e2b6f2c878539a,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744633908724732398,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-z4vd8,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4cd0930a-ab95-47c9-b854-60d4d02c1899,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d267022129b9774026c65c59fbef8a8695d761396266debf5a4d3705b0c40ac5,PodSandboxId:bbc7a9734e4a5f79f6e3f4672572c3e1066f6e008a76b85a14d327ff59cc9944,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744633874560295396,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7cb6157d-76c5-48e2-a0f6-eb478bf84611,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25396a72d045eade6f158fec8ee8e450282199e20553060df500f6d07301a5,PodSandboxId:fdca15f19904fb16e8641ed654773e98dcb64355ed9838291b4ecaf03a751fbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833330170913,Labels:map[string]string{io.kube
rnetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-m9jsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 78c2bfff-6f14-4194-b794-ee10b06219da,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d35b9603ae6ddaa2fd9425a00d44e4f06eecd3fca6a494734797b33df862a7a,PodSandboxId:7ac6321e04f9addd7e44ff6b2a6aa5a9a273619c2b8624c85326335a3d470e8c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744633833239696721,Labels:map[string]string{
io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-klbg6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3411c5b6-bfba-4c65-b22a-1375ad82b576,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747f2e0b58d2f3d7dfd257100e3b5bfcdf2704fb442ea950363a7b76fa03164d,PodSandboxId:9f7b911af306be6bfe1580264196b8fd71cbd3c174f28aaf7520338e85ea3ece,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744633775222416762,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d0aa2a13f3b9e963a5525a98316fdb450bdedd36f34b8807639fc299fd728e,PodSandboxId:4e1ef32351550387eb9b52365521b118d80faf9603d5cce495f2f9d6b17427a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744633774670495721,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e530384f37b390f123f9938b79fbf3be5157db5a590a6544ca36af743bc39b1c,PodSandboxId:dba57defd9231bb47ece00be2daed649a29c184bad1e1b79deae1065e817d43a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744633774663612816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708549be1081b704ef9ac4242734030c4d88bf88d3fd1c7901014d85549b1358,PodSandboxId:da2463b9deb985b8d3a88c510b0bac83de1d809335f15dddc4058ba6e0f17012,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a1747
38baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744633770973864151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2591a1078d77eeb97d04aada88e7b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d355b0cf1e9f8799ffb38b5c24435802ebadef16716cb1d1a24263db78bef5ae,PodSandboxId:7fcf91bcdaea44a5969fd7514fd4fef7013bc36e84e1bc44c15cdc8e4f0f8e4c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a
9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744633770870792994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cde8cbf028726ed80b1bc9b94ee92811ad76b704ee8721f1458f774c86feb5e7,PodSandboxId:9c9534901c6c8c0fae018b27408c5b4b88723ea4135ce110170c2f39ee855056,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff
195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744633770825814387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016878b0ed0a6d4f8a7c2d3161cb1c8a23e3829a0b468ebd97f4cf25b5f23c75,PodSandboxId:c662ecf27d35bb6ba782c62cb4c8775a4dc39536a8a8055c724d48a5f20840f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d
3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744633770858986624,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d59fc41d941c2eaf6cdebce8611d06daaafcb9eaafcc0cef5b05b315973518,PodSandboxId:3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562,State:CONTAINER_EXITED,CreatedAt:1744633744246900443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b300b501-5ebd-47df-a358-754ea70df398,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dab08dc9296e80d47e6714ed00eb35508495621996f34120627ac6753ae4cd62,PodSandboxId:14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_
EXITED,CreatedAt:1744633732548629328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vf98w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b43cc318416a2e93e0f486990973f00fbe7eadce25e6d4c5a9df4b5864b134eb,PodSandboxId:47de29e77014fc8c2534f407cfb40d25809e17d437b2e8678b870f763129f792,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744633732511353905,L
abels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h7t9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d520f2d-d301-4af3-ad63-624916ce0305,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0bd14547fc66bbf0334984d041096ff9389559194f95a61071dd4f76462fc4,PodSandboxId:2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bc
a912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744633727923887746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e49f19f008c8999723c33183b5ea26,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c7b28e5bcd8258b42a5bb54da8259e6937eb70a21fec15edbfde289b7d3bfb,PodSandboxId:ba5296a9c05274d5096b6054afa9d862d22bd874013d55555aa694667aa300af,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247
abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744633727862506473,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1afd23a171634d4b565b0afc9c52f067,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e58c4d42bf0be189248882fccc4f92d682942ad3be7b4c1dc9664fa95be5b42,PodSandboxId:a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744633727892806125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-760045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7adf4ebe949d159f6be06adfa228b5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec76ad75-2857-4a4e-b1c1-1be5c48ff33b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	3cbad5ed611c0       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   090136b8086e5       kubernetes-dashboard-7779f9b69b-chgtk
	ff4589cfa0059       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   a86aff1655956       dashboard-metrics-scraper-5d59dccf9b-z4vd8
	d267022129b97       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   bbc7a9734e4a5       busybox-mount
	ef25396a72d04       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   fdca15f19904f       hello-node-fcfd88b6f-m9jsj
	3d35b9603ae6d       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   7ac6321e04f9a       hello-node-connect-58f9cf68d8-klbg6
	747f2e0b58d2f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Running             coredns                     3                   9f7b911af306b       coredns-668d6bf9bc-h7t9g
	39d0aa2a13f3b       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                 11 minutes ago      Running             kube-proxy                  3                   4e1ef32351550       kube-proxy-vf98w
	e530384f37b39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         5                   dba57defd9231       storage-provisioner
	708549be1081b       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                 11 minutes ago      Running             kube-apiserver              0                   da2463b9deb98       kube-apiserver-functional-760045
	d355b0cf1e9f8       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                 11 minutes ago      Running             kube-scheduler              3                   7fcf91bcdaea4       kube-scheduler-functional-760045
	016878b0ed0a6       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 11 minutes ago      Running             etcd                        3                   c662ecf27d35b       etcd-functional-760045
	cde8cbf028726       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                 11 minutes ago      Running             kube-controller-manager     3                   9c9534901c6c8       kube-controller-manager-functional-760045
	55d59fc41d941       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 12 minutes ago      Exited              storage-provisioner         4                   3b40dcff38bcb       storage-provisioner
	dab08dc9296e8       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                 12 minutes ago      Exited              kube-proxy                  2                   14e3608834252       kube-proxy-vf98w
	b43cc318416a2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 12 minutes ago      Exited              coredns                     2                   47de29e77014f       coredns-668d6bf9bc-h7t9g
	0f0bd14547fc6       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                 12 minutes ago      Exited              kube-controller-manager     2                   2b2b899806d88       kube-controller-manager-functional-760045
	2e58c4d42bf0b       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                 12 minutes ago      Exited              kube-scheduler              2                   a702732cd9018       kube-scheduler-functional-760045
	f9c7b28e5bcd8       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 12 minutes ago      Exited              etcd                        2                   ba5296a9c0527       etcd-functional-760045
	
	
	==> coredns [747f2e0b58d2f3d7dfd257100e3b5bfcdf2704fb442ea950363a7b76fa03164d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50058 - 7048 "HINFO IN 1014515586151066443.8180690758486996107. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020501932s
	
	
	==> coredns [b43cc318416a2e93e0f486990973f00fbe7eadce25e6d4c5a9df4b5864b134eb] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47029 - 31991 "HINFO IN 3327528597732309114.3443458371943165367. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021934573s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-760045
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-760045
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9d10b41d083ee2064b1b8c7e16503e13b1847696
	                    minikube.k8s.io/name=functional-760045
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T12_28_01_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 12:27:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-760045
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 12:41:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 12:37:42 +0000   Mon, 14 Apr 2025 12:27:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 12:37:42 +0000   Mon, 14 Apr 2025 12:27:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 12:37:42 +0000   Mon, 14 Apr 2025 12:27:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 12:37:42 +0000   Mon, 14 Apr 2025 12:28:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    functional-760045
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d8f62d93b6f435eb1742e0f1657f644
	  System UUID:                6d8f62d9-3b6f-435e-b174-2e0f1657f644
	  Boot ID:                    63c61889-dd95-4e57-b298-791ea61155d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-klbg6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-fcfd88b6f-m9jsj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     mysql-58ccfd96bb-mdpt4                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-668d6bf9bc-h7t9g                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-functional-760045                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-760045              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-760045     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-vf98w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-760045              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-z4vd8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-chgtk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node functional-760045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node functional-760045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node functional-760045 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node functional-760045 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node functional-760045 event: Registered Node functional-760045 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node functional-760045 event: Registered Node functional-760045 in Controller
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-760045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-760045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-760045 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node functional-760045 event: Registered Node functional-760045 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-760045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-760045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-760045 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-760045 event: Registered Node functional-760045 in Controller
	
	
	==> dmesg <==
	[  +4.080254] kauditd_printk_skb: 213 callbacks suppressed
	[ +18.256468] systemd-fstab-generator[3734]: Ignoring "noauto" option for root device
	[  +0.361031] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.283319] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.741119] kauditd_printk_skb: 8 callbacks suppressed
	[Apr14 12:29] systemd-fstab-generator[4299]: Ignoring "noauto" option for root device
	[ +18.245002] systemd-fstab-generator[5152]: Ignoring "noauto" option for root device
	[  +0.081583] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.061415] systemd-fstab-generator[5164]: Ignoring "noauto" option for root device
	[  +0.176463] systemd-fstab-generator[5178]: Ignoring "noauto" option for root device
	[  +0.150757] systemd-fstab-generator[5190]: Ignoring "noauto" option for root device
	[  +0.294243] systemd-fstab-generator[5218]: Ignoring "noauto" option for root device
	[  +0.798939] systemd-fstab-generator[5341]: Ignoring "noauto" option for root device
	[  +2.340273] systemd-fstab-generator[5704]: Ignoring "noauto" option for root device
	[  +4.321732] kauditd_printk_skb: 210 callbacks suppressed
	[ +11.542173] systemd-fstab-generator[6401]: Ignoring "noauto" option for root device
	[  +0.090041] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.489959] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.142503] kauditd_printk_skb: 21 callbacks suppressed
	[Apr14 12:30] kauditd_printk_skb: 32 callbacks suppressed
	[ +25.342906] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.225639] kauditd_printk_skb: 6 callbacks suppressed
	[Apr14 12:31] kauditd_printk_skb: 32 callbacks suppressed
	[ +12.682867] kauditd_printk_skb: 3 callbacks suppressed
	[ +21.468175] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [016878b0ed0a6d4f8a7c2d3161cb1c8a23e3829a0b468ebd97f4cf25b5f23c75] <==
	{"level":"info","ts":"2025-04-14T12:29:32.178945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 received MsgVoteResp from 7a50af7ffd27cbe1 at term 5"}
	{"level":"info","ts":"2025-04-14T12:29:32.178953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became leader at term 5"}
	{"level":"info","ts":"2025-04-14T12:29:32.178983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7a50af7ffd27cbe1 elected leader 7a50af7ffd27cbe1 at term 5"}
	{"level":"info","ts":"2025-04-14T12:29:32.186114Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7a50af7ffd27cbe1","local-member-attributes":"{Name:functional-760045 ClientURLs:[https://192.168.39.48:2379]}","request-path":"/0/members/7a50af7ffd27cbe1/attributes","cluster-id":"59383b002ca7add2","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T12:29:32.186405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:29:32.186482Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:29:32.187695Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:29:32.187842Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T12:29:32.187876Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T12:29:32.188399Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:29:32.190772Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T12:29:32.189004Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.48:2379"}
	{"level":"info","ts":"2025-04-14T12:30:34.121767Z","caller":"traceutil/trace.go:171","msg":"trace[395416169] transaction","detail":"{read_only:false; response_revision:821; number_of_response:1; }","duration":"222.293458ms","start":"2025-04-14T12:30:33.899457Z","end":"2025-04-14T12:30:34.121751Z","steps":["trace[395416169] 'process raft request'  (duration: 133.830127ms)","trace[395416169] 'compare'  (duration: 88.391535ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T12:31:53.484246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.571986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:31:53.484355Z","caller":"traceutil/trace.go:171","msg":"trace[782028940] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:995; }","duration":"263.719371ms","start":"2025-04-14T12:31:53.220624Z","end":"2025-04-14T12:31:53.484344Z","steps":["trace[782028940] 'range keys from in-memory index tree'  (duration: 263.52016ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:31:53.484364Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.941498ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:31:53.484405Z","caller":"traceutil/trace.go:171","msg":"trace[1997673181] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:995; }","duration":"243.006878ms","start":"2025-04-14T12:31:53.241386Z","end":"2025-04-14T12:31:53.484393Z","steps":["trace[1997673181] 'range keys from in-memory index tree'  (duration: 242.892554ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:31:53.484557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.182307ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-04-14T12:31:53.484565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"357.840646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T12:31:53.484573Z","caller":"traceutil/trace.go:171","msg":"trace[970696691] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:995; }","duration":"164.218942ms","start":"2025-04-14T12:31:53.320349Z","end":"2025-04-14T12:31:53.484567Z","steps":["trace[970696691] 'range keys from in-memory index tree'  (duration: 164.143609ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T12:31:53.484581Z","caller":"traceutil/trace.go:171","msg":"trace[532552859] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:995; }","duration":"357.881756ms","start":"2025-04-14T12:31:53.126694Z","end":"2025-04-14T12:31:53.484576Z","steps":["trace[532552859] 'range keys from in-memory index tree'  (duration: 357.752041ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T12:31:53.484596Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T12:31:53.126679Z","time spent":"357.912218ms","remote":"127.0.0.1:51382","response type":"/etcdserverpb.KV/Range","request count":0,"request size":28,"response count":0,"response size":27,"request content":"key:\"/registry/services/specs\" limit:1 "}
	{"level":"info","ts":"2025-04-14T12:39:32.220840Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1171}
	{"level":"info","ts":"2025-04-14T12:39:32.246415Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1171,"took":"25.168623ms","hash":3611962343,"current-db-size-bytes":4276224,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":1859584,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-04-14T12:39:32.246907Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3611962343,"revision":1171,"compact-revision":-1}
	
	
	==> etcd [f9c7b28e5bcd8258b42a5bb54da8259e6937eb70a21fec15edbfde289b7d3bfb] <==
	{"level":"info","ts":"2025-04-14T12:28:50.108966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-04-14T12:28:50.109052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 received MsgPreVoteResp from 7a50af7ffd27cbe1 at term 3"}
	{"level":"info","ts":"2025-04-14T12:28:50.109086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became candidate at term 4"}
	{"level":"info","ts":"2025-04-14T12:28:50.109110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 received MsgVoteResp from 7a50af7ffd27cbe1 at term 4"}
	{"level":"info","ts":"2025-04-14T12:28:50.109130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became leader at term 4"}
	{"level":"info","ts":"2025-04-14T12:28:50.109149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7a50af7ffd27cbe1 elected leader 7a50af7ffd27cbe1 at term 4"}
	{"level":"info","ts":"2025-04-14T12:28:50.115593Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7a50af7ffd27cbe1","local-member-attributes":"{Name:functional-760045 ClientURLs:[https://192.168.39.48:2379]}","request-path":"/0/members/7a50af7ffd27cbe1/attributes","cluster-id":"59383b002ca7add2","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T12:28:50.115714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:28:50.116184Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T12:28:50.116641Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:28:50.116745Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T12:28:50.117373Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T12:28:50.117476Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.48:2379"}
	{"level":"info","ts":"2025-04-14T12:28:50.117564Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T12:28:50.117590Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T12:29:20.039767Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-14T12:29:20.039832Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-760045","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.48:2380"],"advertise-client-urls":["https://192.168.39.48:2379"]}
	{"level":"warn","ts":"2025-04-14T12:29:20.039905Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T12:29:20.039981Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T12:29:20.123243Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.48:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T12:29:20.123302Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.48:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-14T12:29:20.123351Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7a50af7ffd27cbe1","current-leader-member-id":"7a50af7ffd27cbe1"}
	{"level":"info","ts":"2025-04-14T12:29:20.126701Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.48:2380"}
	{"level":"info","ts":"2025-04-14T12:29:20.126896Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.48:2380"}
	{"level":"info","ts":"2025-04-14T12:29:20.126945Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-760045","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.48:2380"],"advertise-client-urls":["https://192.168.39.48:2379"]}
	
	
	==> kernel <==
	 12:41:29 up 14 min,  0 users,  load average: 0.38, 0.25, 0.19
	Linux functional-760045 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [708549be1081b704ef9ac4242734030c4d88bf88d3fd1c7901014d85549b1358] <==
	I0414 12:29:33.507057       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0414 12:29:33.508698       1 aggregator.go:171] initial CRD sync complete...
	I0414 12:29:33.508802       1 autoregister_controller.go:144] Starting autoregister controller
	I0414 12:29:33.508828       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0414 12:29:33.508919       1 cache.go:39] Caches are synced for autoregister controller
	I0414 12:29:33.514741       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0414 12:29:33.531379       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0414 12:29:33.533500       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0414 12:29:34.300400       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 12:29:34.407392       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 12:29:35.174004       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 12:29:35.240739       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 12:29:35.294008       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 12:29:35.309806       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 12:29:36.679722       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0414 12:29:36.976444       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 12:29:37.026960       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 12:29:52.550376       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.194.213"}
	I0414 12:29:57.424388       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.129.27"}
	I0414 12:29:57.609912       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.255.180"}
	I0414 12:29:58.346543       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.219.196"}
	I0414 12:30:44.867853       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 12:30:45.225695       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.140.31"}
	I0414 12:30:45.266347       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.93.198"}
	I0414 12:31:27.272670       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.235.50"}
	
	
	==> kube-controller-manager [0f0bd14547fc66bbf0334984d041096ff9389559194f95a61071dd4f76462fc4] <==
	I0414 12:28:54.580446       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0414 12:28:54.580483       1 shared_informer.go:320] Caches are synced for deployment
	I0414 12:28:54.580548       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0414 12:28:54.581222       1 shared_informer.go:320] Caches are synced for stateful set
	I0414 12:28:54.586534       1 shared_informer.go:320] Caches are synced for node
	I0414 12:28:54.586760       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0414 12:28:54.586817       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0414 12:28:54.586839       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0414 12:28:54.586863       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0414 12:28:54.586992       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-760045"
	I0414 12:28:54.587356       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 12:28:54.587748       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0414 12:28:54.591565       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 12:28:54.596339       1 shared_informer.go:320] Caches are synced for service account
	I0414 12:28:54.596371       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0414 12:28:54.596641       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.054µs"
	I0414 12:28:54.597591       1 shared_informer.go:320] Caches are synced for TTL
	I0414 12:28:54.604896       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 12:28:54.606258       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0414 12:28:54.621744       1 shared_informer.go:320] Caches are synced for persistent volume
	I0414 12:28:54.630100       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 12:28:54.630187       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 12:28:54.630245       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 12:28:58.367923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.577461ms"
	I0414 12:28:58.368111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="110.823µs"
	
	
	==> kube-controller-manager [cde8cbf028726ed80b1bc9b94ee92811ad76b704ee8721f1458f774c86feb5e7] <==
	I0414 12:30:45.134997       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="39.842185ms"
	I0414 12:30:45.192531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="57.438781ms"
	I0414 12:30:45.192691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="62.031µs"
	I0414 12:30:45.192767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="67.109µs"
	I0414 12:31:05.000294       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-760045"
	I0414 12:31:27.358376       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="39.061865ms"
	I0414 12:31:27.368049       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="9.511341ms"
	I0414 12:31:27.368197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="94.683µs"
	I0414 12:31:27.378460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="73.367µs"
	I0414 12:31:35.349236       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-760045"
	I0414 12:31:49.219415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="14.716497ms"
	I0414 12:31:49.219499       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="40.371µs"
	I0414 12:31:55.267495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="18.593578ms"
	I0414 12:31:55.267821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="63.259µs"
	I0414 12:32:06.259174       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-760045"
	I0414 12:33:07.518090       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-760045"
	I0414 12:33:12.619357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="58.284µs"
	I0414 12:33:25.402944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="51.617µs"
	I0414 12:35:30.409523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="52.842µs"
	I0414 12:35:43.401756       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="52.036µs"
	I0414 12:37:34.408267       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="731.558µs"
	I0414 12:37:42.094416       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-760045"
	I0414 12:37:46.403239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="43.007µs"
	I0414 12:39:41.409378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="105.723µs"
	I0414 12:39:52.404153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="48.089µs"
	
	
	==> kube-proxy [39d0aa2a13f3b9e963a5525a98316fdb450bdedd36f34b8807639fc299fd728e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 12:29:35.076835       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 12:29:35.086849       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	E0414 12:29:35.086997       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 12:29:35.158825       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 12:29:35.158855       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 12:29:35.158878       1 server_linux.go:170] "Using iptables Proxier"
	I0414 12:29:35.162459       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 12:29:35.162762       1 server.go:497] "Version info" version="v1.32.2"
	I0414 12:29:35.162991       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:29:35.164275       1 config.go:199] "Starting service config controller"
	I0414 12:29:35.164379       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 12:29:35.164446       1 config.go:105] "Starting endpoint slice config controller"
	I0414 12:29:35.164463       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 12:29:35.164994       1 config.go:329] "Starting node config controller"
	I0414 12:29:35.165102       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 12:29:35.264544       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 12:29:35.264602       1 shared_informer.go:320] Caches are synced for service config
	I0414 12:29:35.265929       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [dab08dc9296e80d47e6714ed00eb35508495621996f34120627ac6753ae4cd62] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 12:28:52.870736       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 12:28:52.895096       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	E0414 12:28:52.895154       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 12:28:52.982547       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 12:28:52.983079       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 12:28:52.983308       1 server_linux.go:170] "Using iptables Proxier"
	I0414 12:28:52.989875       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 12:28:52.990170       1 server.go:497] "Version info" version="v1.32.2"
	I0414 12:28:52.990196       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:28:52.991741       1 config.go:199] "Starting service config controller"
	I0414 12:28:52.991790       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 12:28:52.991832       1 config.go:105] "Starting endpoint slice config controller"
	I0414 12:28:52.991836       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 12:28:52.992292       1 config.go:329] "Starting node config controller"
	I0414 12:28:52.992318       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 12:28:53.092110       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 12:28:53.092156       1 shared_informer.go:320] Caches are synced for service config
	I0414 12:28:53.092430       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2e58c4d42bf0be189248882fccc4f92d682942ad3be7b4c1dc9664fa95be5b42] <==
	I0414 12:28:48.747183       1 serving.go:386] Generated self-signed cert in-memory
	W0414 12:28:51.299622       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 12:28:51.299751       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 12:28:51.299778       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 12:28:51.299797       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 12:28:51.376394       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 12:28:51.376906       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:28:51.380553       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 12:28:51.380749       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 12:28:51.380784       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 12:28:51.385605       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 12:28:51.481305       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0414 12:29:20.039536       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d355b0cf1e9f8799ffb38b5c24435802ebadef16716cb1d1a24263db78bef5ae] <==
	I0414 12:29:31.931709       1 serving.go:386] Generated self-signed cert in-memory
	W0414 12:29:33.367647       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 12:29:33.367764       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 12:29:33.367791       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 12:29:33.367813       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 12:29:33.422199       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 12:29:33.423643       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 12:29:33.444389       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 12:29:33.445083       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 12:29:33.445125       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 12:29:33.449399       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 12:29:33.549960       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 12:40:30 functional-760045 kubelet[5711]: E0414 12:40:30.460175    5711 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7adf4ebe949d159f6be06adfa228b5a9/crio-a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77: Error finding container a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77: Status 404 returned error can't find the container with id a702732cd9018c091e6d7217db9f77937dc75c03b18d6db0b26ec08588522b77
	Apr 14 12:40:30 functional-760045 kubelet[5711]: E0414 12:40:30.460421    5711 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod480a6bd7-bf3b-4bae-b2a6-2fdebb492ac9/crio-14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e: Error finding container 14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e: Status 404 returned error can't find the container with id 14e360883425224b0e29441121c41e9f2937b0ed32887e5854c76359e57b164e
	Apr 14 12:40:30 functional-760045 kubelet[5711]: E0414 12:40:30.460670    5711 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podb300b501-5ebd-47df-a358-754ea70df398/crio-3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f: Error finding container 3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f: Status 404 returned error can't find the container with id 3b40dcff38bcb61556b92668b4d160e7c0b514624244750ec59c2cb3ab6dba6f
	Apr 14 12:40:30 functional-760045 kubelet[5711]: E0414 12:40:30.460909    5711 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod71e49f19f008c8999723c33183b5ea26/crio-2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8: Error finding container 2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8: Status 404 returned error can't find the container with id 2b2b899806d88d37089b7c1dfe3db15d4ca39b0f92f683de81b257317b1f97e8
	Apr 14 12:40:30 functional-760045 kubelet[5711]: E0414 12:40:30.654228    5711 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634430653959155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:40:30 functional-760045 kubelet[5711]: E0414 12:40:30.654254    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634430653959155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:40:31 functional-760045 kubelet[5711]: E0414 12:40:31.386577    5711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="043b9cf5-ee9c-4156-8778-2420e4c78ede"
	Apr 14 12:40:32 functional-760045 kubelet[5711]: E0414 12:40:32.388231    5711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-mdpt4" podUID="7c8c4c0e-fc92-4dde-933d-2daf5e4c8526"
	Apr 14 12:40:40 functional-760045 kubelet[5711]: E0414 12:40:40.659721    5711 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634440658726345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:40:40 functional-760045 kubelet[5711]: E0414 12:40:40.659772    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634440658726345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:40:44 functional-760045 kubelet[5711]: E0414 12:40:44.388969    5711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-mdpt4" podUID="7c8c4c0e-fc92-4dde-933d-2daf5e4c8526"
	Apr 14 12:40:45 functional-760045 kubelet[5711]: E0414 12:40:45.386406    5711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="043b9cf5-ee9c-4156-8778-2420e4c78ede"
	Apr 14 12:40:50 functional-760045 kubelet[5711]: E0414 12:40:50.661476    5711 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634450660803939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:40:50 functional-760045 kubelet[5711]: E0414 12:40:50.661789    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634450660803939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:40:59 functional-760045 kubelet[5711]: E0414 12:40:59.386393    5711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="043b9cf5-ee9c-4156-8778-2420e4c78ede"
	Apr 14 12:41:00 functional-760045 kubelet[5711]: E0414 12:41:00.664545    5711 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634460664077534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:41:00 functional-760045 kubelet[5711]: E0414 12:41:00.664921    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634460664077534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:41:10 functional-760045 kubelet[5711]: E0414 12:41:10.667306    5711 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634470666955396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:41:10 functional-760045 kubelet[5711]: E0414 12:41:10.667332    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634470666955396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:41:17 functional-760045 kubelet[5711]: E0414 12:41:17.288415    5711 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Apr 14 12:41:17 functional-760045 kubelet[5711]: E0414 12:41:17.288480    5711 kuberuntime_image.go:55] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Apr 14 12:41:17 functional-760045 kubelet[5711]: E0414 12:41:17.288690    5711 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bdvxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(7847bae9-503c-4fa5-8c68-1b0ad432e4a7): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 14 12:41:17 functional-760045 kubelet[5711]: E0414 12:41:17.290983    5711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7847bae9-503c-4fa5-8c68-1b0ad432e4a7"
	Apr 14 12:41:20 functional-760045 kubelet[5711]: E0414 12:41:20.670611    5711 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634480669910955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 12:41:20 functional-760045 kubelet[5711]: E0414 12:41:20.670659    5711 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744634480669910955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225024,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [3cbad5ed611c0eba69f86bcf0ab8537b10e2e840003883ff68795ab32e27c7e8] <==
	2025/04/14 12:31:54 Starting overwatch
	2025/04/14 12:31:54 Using namespace: kubernetes-dashboard
	2025/04/14 12:31:54 Using in-cluster config to connect to apiserver
	2025/04/14 12:31:54 Using secret token for csrf signing
	2025/04/14 12:31:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/14 12:31:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/14 12:31:54 Successful initial request to the apiserver, version: v1.32.2
	2025/04/14 12:31:54 Generating JWE encryption key
	2025/04/14 12:31:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/14 12:31:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/14 12:31:54 Initializing JWE encryption key from synchronized object
	2025/04/14 12:31:54 Creating in-cluster Sidecar client
	2025/04/14 12:31:54 Successful request to sidecar
	2025/04/14 12:31:54 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [55d59fc41d941c2eaf6cdebce8611d06daaafcb9eaafcc0cef5b05b315973518] <==
	I0414 12:29:04.337505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 12:29:04.348641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 12:29:04.348702       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [e530384f37b390f123f9938b79fbf3be5157db5a590a6544ca36af743bc39b1c] <==
	I0414 12:29:34.873609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 12:29:34.882296       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 12:29:34.882911       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 12:29:52.289312       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0f6f6fc-646a-4e27-8468-c31b7be52935", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-760045_f6802bd4-0f4f-4c68-ae8e-596452fbf97f became leader
	I0414 12:29:52.290111       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 12:29:52.290297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-760045_f6802bd4-0f4f-4c68-ae8e-596452fbf97f!
	I0414 12:29:52.391211       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-760045_f6802bd4-0f4f-4c68-ae8e-596452fbf97f!
	I0414 12:30:03.367335       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0414 12:30:03.367474       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    fb3662dc-75cd-4ede-be17-9ab3d7836fef 341 0 2025-04-14 12:28:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-04-14 12:28:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-2014da10-195c-4c51-80df-c3118a12974a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  2014da10-195c-4c51-80df-c3118a12974a 762 0 2025-04-14 12:30:03 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-04-14 12:30:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-04-14 12:30:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0414 12:30:03.368120       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-2014da10-195c-4c51-80df-c3118a12974a" provisioned
	I0414 12:30:03.368173       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0414 12:30:03.368191       1 volume_store.go:212] Trying to save persistentvolume "pvc-2014da10-195c-4c51-80df-c3118a12974a"
	I0414 12:30:03.369254       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2014da10-195c-4c51-80df-c3118a12974a", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0414 12:30:03.446777       1 volume_store.go:219] persistentvolume "pvc-2014da10-195c-4c51-80df-c3118a12974a" saved
	I0414 12:30:03.453308       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2014da10-195c-4c51-80df-c3118a12974a", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-2014da10-195c-4c51-80df-c3118a12974a
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-760045 -n functional-760045
helpers_test.go:261: (dbg) Run:  kubectl --context functional-760045 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-mdpt4 nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-760045 describe pod busybox-mount mysql-58ccfd96bb-mdpt4 nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-760045 describe pod busybox-mount mysql-58ccfd96bb-mdpt4 nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-760045/192.168.39.48
	Start Time:       Mon, 14 Apr 2025 12:30:43 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d267022129b9774026c65c59fbef8a8695d761396266debf5a4d3705b0c40ac5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 14 Apr 2025 12:31:14 +0000
	      Finished:     Mon, 14 Apr 2025 12:31:14 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmts7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-mmts7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-760045
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.342s (30.125s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-mdpt4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-760045/192.168.39.48
	Start Time:       Mon, 14 Apr 2025 12:31:27 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d5h96 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-d5h96:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-58ccfd96bb-mdpt4 to functional-760045
	  Warning  Failed     2m3s (x4 over 8m19s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m3s (x4 over 8m19s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    46s (x11 over 8m18s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     46s (x11 over 8m18s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    31s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-760045/192.168.39.48
	Start Time:       Mon, 14 Apr 2025 12:29:57 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bdvxx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bdvxx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  11m                    default-scheduler  Successfully assigned default/nginx-svc to functional-760045
	  Warning  Failed     11m                    kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m12s (x5 over 11m)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m37s (x5 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     3m37s (x4 over 9m45s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m38s (x16 over 11m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    78s (x22 over 11m)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     13s                    kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-760045/192.168.39.48
	Start Time:       Mon, 14 Apr 2025 12:30:05 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6bwl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-g6bwl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/sp-pod to functional-760045
	  Warning  Failed     10m                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m6s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m6s (x4 over 9m5s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     86s (x16 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    31s (x20 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Normal   Pulling    19s (x6 over 11m)    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (603.07s)
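The describe events above point at a single root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests), not a defect in the pods themselves. A possible mitigation, not part of this run, would be to give kubelet pulls registry credentials; the secret name "regcred" and the <user>/<token> placeholders below are illustrative only:

	kubectl --context functional-760045 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context functional-760045 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'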

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-760045 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7847bae9-503c-4fa5-8c68-1b0ad432e4a7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-760045 -n functional-760045
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-04-14 12:33:57.713017872 +0000 UTC m=+911.646225125
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-760045 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-760045 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-760045/192.168.39.48
Start Time:       Mon, 14 Apr 2025 12:29:57 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bdvxx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-bdvxx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-760045
  Warning  Failed     3m27s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    103s (x3 over 4m)    kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     14s (x3 over 3m27s)  kubelet            Error: ErrImagePull
  Warning  Failed     14s (x2 over 2m12s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    3s (x3 over 3m27s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     3s (x3 over 3m27s)   kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-760045 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-760045 logs nginx-svc -n default: exit status 1 (83.707826ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-760045 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.71s)
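nginx-svc never leaves Pending for the same reason: the docker.io/nginx:alpine pull is rate-limited. One way to sidestep the registry entirely, assuming the image already exists on the CI host's daemon, would be to pre-seed it into the cluster before the test:

	# pre-seed the image into the runtime inside the VM (only works if nginx:alpine is present locally)
	out/minikube-linux-amd64 -p functional-760045 image load docker.io/nginx:alpine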

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (874.327554ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:361: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.87s)
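The setup pull of kicbase/echo-server:1.0 hits the same unauthenticated rate limit on the host daemon, so the ImageCommands subtests below fail in cascade. Authenticating the daemon raises the limit; a sketch, assuming a Docker Hub account or access token is available:

	docker login --username <user>   # prompts for a password or access token
	docker pull kicbase/echo-server:1.0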

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image load --daemon kicbase/echo-server:functional-760045 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-760045" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image load --daemon kicbase/echo-server:functional-760045 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-760045" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:252: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (862.781286ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:254: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image save kicbase/echo-server:functional-760045 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:403: expected "/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)
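`image save` produced no tarball because kicbase/echo-server:functional-760045 was never loaded into the cluster (see the Setup failure above), and the ImageLoadFromFile failure below is the same missing file. The round trip being exercised is roughly the following, with the tag and path taken from the log:

	out/minikube-linux-amd64 -p functional-760045 image save kicbase/echo-server:functional-760045 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-760045 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-760045 image ls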

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:428: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0414 12:31:26.694480 1184889 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:31:26.694758 1184889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:31:26.694768 1184889 out.go:358] Setting ErrFile to fd 2...
	I0414 12:31:26.694772 1184889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:31:26.695032 1184889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 12:31:26.695708 1184889 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:31:26.695821 1184889 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:31:26.696271 1184889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:31:26.696345 1184889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:31:26.713565 1184889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40377
	I0414 12:31:26.714215 1184889 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:31:26.715005 1184889 main.go:141] libmachine: Using API Version  1
	I0414 12:31:26.715051 1184889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:31:26.715728 1184889 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:31:26.716024 1184889 main.go:141] libmachine: (functional-760045) Calling .GetState
	I0414 12:31:26.718631 1184889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:31:26.718694 1184889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:31:26.735621 1184889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0414 12:31:26.736183 1184889 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:31:26.736786 1184889 main.go:141] libmachine: Using API Version  1
	I0414 12:31:26.736818 1184889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:31:26.737354 1184889 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:31:26.737584 1184889 main.go:141] libmachine: (functional-760045) Calling .DriverName
	I0414 12:31:26.737946 1184889 ssh_runner.go:195] Run: systemctl --version
	I0414 12:31:26.738021 1184889 main.go:141] libmachine: (functional-760045) Calling .GetSSHHostname
	I0414 12:31:26.741300 1184889 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined MAC address 52:54:00:36:e1:9b in network mk-functional-760045
	I0414 12:31:26.741831 1184889 main.go:141] libmachine: (functional-760045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e1:9b", ip: ""} in network mk-functional-760045: {Iface:virbr1 ExpiryTime:2025-04-14 13:27:34 +0000 UTC Type:0 Mac:52:54:00:36:e1:9b Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-760045 Clientid:01:52:54:00:36:e1:9b}
	I0414 12:31:26.741868 1184889 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined IP address 192.168.39.48 and MAC address 52:54:00:36:e1:9b in network mk-functional-760045
	I0414 12:31:26.742044 1184889 main.go:141] libmachine: (functional-760045) Calling .GetSSHPort
	I0414 12:31:26.742292 1184889 main.go:141] libmachine: (functional-760045) Calling .GetSSHKeyPath
	I0414 12:31:26.742488 1184889 main.go:141] libmachine: (functional-760045) Calling .GetSSHUsername
	I0414 12:31:26.742719 1184889 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/functional-760045/id_rsa Username:docker}
	I0414 12:31:26.819932 1184889 cache_images.go:289] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
	W0414 12:31:26.820020 1184889 cache_images.go:253] Failed to load cached images for "functional-760045": loading images: stat /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I0414 12:31:26.820057 1184889 cache_images.go:265] failed pushing to: functional-760045
	I0414 12:31:26.820084 1184889 main.go:141] libmachine: Making call to close driver server
	I0414 12:31:26.820097 1184889 main.go:141] libmachine: (functional-760045) Calling .Close
	I0414 12:31:26.820448 1184889 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:31:26.820468 1184889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:31:26.820478 1184889 main.go:141] libmachine: Making call to close driver server
	I0414 12:31:26.820487 1184889 main.go:141] libmachine: (functional-760045) Calling .Close
	I0414 12:31:26.820754 1184889 main.go:141] libmachine: Successfully made call to close driver server
	I0414 12:31:26.820810 1184889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 12:31:26.820846 1184889 main.go:141] libmachine: (functional-760045) DBG | Closing plugin on server side

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-760045
functional_test.go:436: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-760045: exit status 1 (20.098069ms)

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-760045

                                                
                                                
** /stderr **
functional_test.go:438: failed to remove image from docker: exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-760045

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)
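The daemon-save variant fails at its first step: `docker rmi kicbase/echo-server:functional-760045` finds no such image on the host because the Setup pull above never succeeded. A quick check on the host confirms the cascade (expected to print nothing here):

	docker image ls kicbase/echo-server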

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (103.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0414 12:33:57.878137 1175746 retry.go:31] will retry after 2.089335425s: Temporary Error: Get "http:": http: no Host in request URL
I0414 12:33:59.968748 1175746 retry.go:31] will retry after 5.787834328s: Temporary Error: Get "http:": http: no Host in request URL
I0414 12:34:05.757025 1175746 retry.go:31] will retry after 3.805045407s: Temporary Error: Get "http:": http: no Host in request URL
I0414 12:34:09.563020 1175746 retry.go:31] will retry after 11.19984949s: Temporary Error: Get "http:": http: no Host in request URL
I0414 12:34:20.763958 1175746 retry.go:31] will retry after 16.632972732s: Temporary Error: Get "http:": http: no Host in request URL
I0414 12:34:37.397207 1175746 retry.go:31] will retry after 22.617699742s: Temporary Error: Get "http:": http: no Host in request URL
I0414 12:35:00.015465 1175746 retry.go:31] will retry after 40.968460333s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-760045 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.105.129.27   10.105.129.27   80:31623/TCP   5m44s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (103.19s)
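The service did receive its LoadBalancer EXTERNAL-IP (10.105.129.27) from the tunnel, but every retry requests "http:" with no host, i.e. the test never obtained a usable endpoint; this is most likely a knock-on effect of the earlier WaitService failure rather than a tunnel problem. With a healthy pod and `minikube tunnel` still running, the check reduces to roughly:

	curl -s http://10.105.129.27/ | grep "Welcome to nginx!"   # IP taken from the kubectl get svc output above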

                                                
                                    
TestPreload (174.59s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-986059 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0414 13:24:57.146056 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-986059 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.746019402s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-986059 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-986059 image pull gcr.io/k8s-minikube/busybox: (3.708313975s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-986059
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-986059: (7.322359639s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-986059 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0414 13:26:49.053596 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-986059 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.338328497s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-986059 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
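gcr.io/k8s-minikube/busybox was pulled into the cluster before the stop/start cycle but is absent from the post-restart image list above, which is exactly what the assertion checks. A manual reproduction of the check, assuming the profile still exists:

	out/minikube-linux-amd64 -p test-preload-986059 image list | grep busybox   # empty output reproduces the failure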
panic.go:631: *** TestPreload FAILED at 2025-04-14 13:27:47.306590036 +0000 UTC m=+4141.239797202
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-986059 -n test-preload-986059
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-986059 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-986059 logs -n 25: (1.27089406s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-008349 ssh -n                                                                 | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:12 UTC | 14 Apr 25 13:12 UTC |
	|         | multinode-008349-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-008349 ssh -n multinode-008349 sudo cat                                       | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:12 UTC | 14 Apr 25 13:12 UTC |
	|         | /home/docker/cp-test_multinode-008349-m03_multinode-008349.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-008349 cp multinode-008349-m03:/home/docker/cp-test.txt                       | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:12 UTC | 14 Apr 25 13:12 UTC |
	|         | multinode-008349-m02:/home/docker/cp-test_multinode-008349-m03_multinode-008349-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-008349 ssh -n                                                                 | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:12 UTC | 14 Apr 25 13:12 UTC |
	|         | multinode-008349-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-008349 ssh -n multinode-008349-m02 sudo cat                                   | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:12 UTC | 14 Apr 25 13:12 UTC |
	|         | /home/docker/cp-test_multinode-008349-m03_multinode-008349-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-008349 node stop m03                                                          | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:12 UTC | 14 Apr 25 13:12 UTC |
	| node    | multinode-008349 node start                                                             | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:12 UTC | 14 Apr 25 13:13 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-008349                                                                | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:13 UTC |                     |
	| stop    | -p multinode-008349                                                                     | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:13 UTC | 14 Apr 25 13:16 UTC |
	| start   | -p multinode-008349                                                                     | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:16 UTC | 14 Apr 25 13:19 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-008349                                                                | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:19 UTC |                     |
	| node    | multinode-008349 node delete                                                            | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:19 UTC | 14 Apr 25 13:19 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-008349 stop                                                                   | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:19 UTC | 14 Apr 25 13:22 UTC |
	| start   | -p multinode-008349                                                                     | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:22 UTC | 14 Apr 25 13:24 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-008349                                                                | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:24 UTC |                     |
	| start   | -p multinode-008349-m02                                                                 | multinode-008349-m02 | jenkins | v1.35.0 | 14 Apr 25 13:24 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-008349-m03                                                                 | multinode-008349-m03 | jenkins | v1.35.0 | 14 Apr 25 13:24 UTC | 14 Apr 25 13:24 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-008349                                                                 | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:24 UTC |                     |
	| delete  | -p multinode-008349-m03                                                                 | multinode-008349-m03 | jenkins | v1.35.0 | 14 Apr 25 13:24 UTC | 14 Apr 25 13:24 UTC |
	| delete  | -p multinode-008349                                                                     | multinode-008349     | jenkins | v1.35.0 | 14 Apr 25 13:24 UTC | 14 Apr 25 13:24 UTC |
	| start   | -p test-preload-986059                                                                  | test-preload-986059  | jenkins | v1.35.0 | 14 Apr 25 13:24 UTC | 14 Apr 25 13:26 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-986059 image pull                                                          | test-preload-986059  | jenkins | v1.35.0 | 14 Apr 25 13:26 UTC | 14 Apr 25 13:26 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-986059                                                                  | test-preload-986059  | jenkins | v1.35.0 | 14 Apr 25 13:26 UTC | 14 Apr 25 13:26 UTC |
	| start   | -p test-preload-986059                                                                  | test-preload-986059  | jenkins | v1.35.0 | 14 Apr 25 13:26 UTC | 14 Apr 25 13:27 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-986059 image list                                                          | test-preload-986059  | jenkins | v1.35.0 | 14 Apr 25 13:27 UTC | 14 Apr 25 13:27 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 13:26:44
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 13:26:44.776132 1210207 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:26:44.776320 1210207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:26:44.776333 1210207 out.go:358] Setting ErrFile to fd 2...
	I0414 13:26:44.776339 1210207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:26:44.776596 1210207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:26:44.777266 1210207 out.go:352] Setting JSON to false
	I0414 13:26:44.778440 1210207 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18552,"bootTime":1744618653,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:26:44.778591 1210207 start.go:139] virtualization: kvm guest
	I0414 13:26:44.781571 1210207 out.go:177] * [test-preload-986059] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:26:44.783336 1210207 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 13:26:44.783327 1210207 notify.go:220] Checking for updates...
	I0414 13:26:44.785127 1210207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:26:44.787054 1210207 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:26:44.788745 1210207 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:26:44.790346 1210207 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:26:44.791763 1210207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:26:44.793658 1210207 config.go:182] Loaded profile config "test-preload-986059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 13:26:44.794157 1210207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:26:44.794262 1210207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:26:44.811574 1210207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0414 13:26:44.812190 1210207 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:26:44.812812 1210207 main.go:141] libmachine: Using API Version  1
	I0414 13:26:44.812843 1210207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:26:44.813409 1210207 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:26:44.813664 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:26:44.815953 1210207 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 13:26:44.817523 1210207 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:26:44.818060 1210207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:26:44.818178 1210207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:26:44.835522 1210207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42625
	I0414 13:26:44.836383 1210207 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:26:44.837173 1210207 main.go:141] libmachine: Using API Version  1
	I0414 13:26:44.837213 1210207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:26:44.837863 1210207 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:26:44.838225 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:26:44.883441 1210207 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 13:26:44.884892 1210207 start.go:297] selected driver: kvm2
	I0414 13:26:44.884921 1210207 start.go:901] validating driver "kvm2" against &{Name:test-preload-986059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 Cluster
Name:test-preload-986059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:26:44.885083 1210207 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:26:44.886224 1210207 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:26:44.886362 1210207 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:26:44.904200 1210207 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:26:44.904785 1210207 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:26:44.904846 1210207 cni.go:84] Creating CNI manager for ""
	I0414 13:26:44.904891 1210207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:26:44.904961 1210207 start.go:340] cluster config:
	{Name:test-preload-986059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-986059 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:26:44.905110 1210207 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:26:44.908026 1210207 out.go:177] * Starting "test-preload-986059" primary control-plane node in "test-preload-986059" cluster
	I0414 13:26:44.909659 1210207 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 13:26:44.941168 1210207 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0414 13:26:44.941198 1210207 cache.go:56] Caching tarball of preloaded images
	I0414 13:26:44.941404 1210207 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 13:26:44.943519 1210207 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0414 13:26:44.945850 1210207 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 13:26:44.976715 1210207 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0414 13:26:48.532530 1210207 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 13:26:48.532630 1210207 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 13:26:49.408209 1210207 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0414 13:26:49.408356 1210207 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/config.json ...
	I0414 13:26:49.408587 1210207 start.go:360] acquireMachinesLock for test-preload-986059: {Name:mk1c744e856a4885e6214d755048926590b4b12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 13:26:49.408662 1210207 start.go:364] duration metric: took 51.309µs to acquireMachinesLock for "test-preload-986059"
	I0414 13:26:49.408679 1210207 start.go:96] Skipping create...Using existing machine configuration
	I0414 13:26:49.408688 1210207 fix.go:54] fixHost starting: 
	I0414 13:26:49.408987 1210207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:26:49.409032 1210207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:26:49.425397 1210207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42169
	I0414 13:26:49.426054 1210207 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:26:49.426645 1210207 main.go:141] libmachine: Using API Version  1
	I0414 13:26:49.426683 1210207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:26:49.427160 1210207 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:26:49.427415 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:26:49.427608 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetState
	I0414 13:26:49.429579 1210207 fix.go:112] recreateIfNeeded on test-preload-986059: state=Stopped err=<nil>
	I0414 13:26:49.429613 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	W0414 13:26:49.429796 1210207 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 13:26:49.432487 1210207 out.go:177] * Restarting existing kvm2 VM for "test-preload-986059" ...
	I0414 13:26:49.434374 1210207 main.go:141] libmachine: (test-preload-986059) Calling .Start
	I0414 13:26:49.434729 1210207 main.go:141] libmachine: (test-preload-986059) starting domain...
	I0414 13:26:49.434757 1210207 main.go:141] libmachine: (test-preload-986059) ensuring networks are active...
	I0414 13:26:49.435939 1210207 main.go:141] libmachine: (test-preload-986059) Ensuring network default is active
	I0414 13:26:49.436449 1210207 main.go:141] libmachine: (test-preload-986059) Ensuring network mk-test-preload-986059 is active
	I0414 13:26:49.437009 1210207 main.go:141] libmachine: (test-preload-986059) getting domain XML...
	I0414 13:26:49.437904 1210207 main.go:141] libmachine: (test-preload-986059) creating domain...
	I0414 13:26:50.752131 1210207 main.go:141] libmachine: (test-preload-986059) waiting for IP...
	I0414 13:26:50.753066 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:50.753498 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:50.753599 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:50.753502 1210260 retry.go:31] will retry after 224.466511ms: waiting for domain to come up
	I0414 13:26:50.980264 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:50.980971 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:50.981034 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:50.980924 1210260 retry.go:31] will retry after 270.335842ms: waiting for domain to come up
	I0414 13:26:51.252855 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:51.253461 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:51.253494 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:51.253433 1210260 retry.go:31] will retry after 341.292157ms: waiting for domain to come up
	I0414 13:26:51.596086 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:51.596704 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:51.596731 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:51.596645 1210260 retry.go:31] will retry after 596.585385ms: waiting for domain to come up
	I0414 13:26:52.194732 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:52.195266 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:52.195299 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:52.195225 1210260 retry.go:31] will retry after 611.307797ms: waiting for domain to come up
	I0414 13:26:52.808676 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:52.809262 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:52.809305 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:52.809217 1210260 retry.go:31] will retry after 943.913986ms: waiting for domain to come up
	I0414 13:26:53.754478 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:53.754922 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:53.754949 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:53.754877 1210260 retry.go:31] will retry after 912.216553ms: waiting for domain to come up
	I0414 13:26:54.668787 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:54.669281 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:54.669312 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:54.669227 1210260 retry.go:31] will retry after 1.028421961s: waiting for domain to come up
	I0414 13:26:55.699792 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:55.700306 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:55.700347 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:55.700259 1210260 retry.go:31] will retry after 1.613813346s: waiting for domain to come up
	I0414 13:26:57.316366 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:57.317039 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:57.317077 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:57.316965 1210260 retry.go:31] will retry after 1.628363405s: waiting for domain to come up
	I0414 13:26:58.948079 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:26:58.948685 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:26:58.948718 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:26:58.948611 1210260 retry.go:31] will retry after 1.945646122s: waiting for domain to come up
	I0414 13:27:00.896807 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:00.897470 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:27:00.897500 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:27:00.897448 1210260 retry.go:31] will retry after 2.877500437s: waiting for domain to come up
	I0414 13:27:03.778944 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:03.779569 1210207 main.go:141] libmachine: (test-preload-986059) DBG | unable to find current IP address of domain test-preload-986059 in network mk-test-preload-986059
	I0414 13:27:03.779599 1210207 main.go:141] libmachine: (test-preload-986059) DBG | I0414 13:27:03.779516 1210260 retry.go:31] will retry after 4.404269291s: waiting for domain to come up
	I0414 13:27:08.186252 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.186936 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has current primary IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.186973 1210207 main.go:141] libmachine: (test-preload-986059) found domain IP: 192.168.39.208
	I0414 13:27:08.186988 1210207 main.go:141] libmachine: (test-preload-986059) reserving static IP address...
	I0414 13:27:08.187725 1210207 main.go:141] libmachine: (test-preload-986059) reserved static IP address 192.168.39.208 for domain test-preload-986059
	I0414 13:27:08.187771 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "test-preload-986059", mac: "52:54:00:dc:a1:cd", ip: "192.168.39.208"} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:08.187782 1210207 main.go:141] libmachine: (test-preload-986059) waiting for SSH...
	I0414 13:27:08.187812 1210207 main.go:141] libmachine: (test-preload-986059) DBG | skip adding static IP to network mk-test-preload-986059 - found existing host DHCP lease matching {name: "test-preload-986059", mac: "52:54:00:dc:a1:cd", ip: "192.168.39.208"}
	I0414 13:27:08.187822 1210207 main.go:141] libmachine: (test-preload-986059) DBG | Getting to WaitForSSH function...
	I0414 13:27:08.192080 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.192622 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:08.192680 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.192881 1210207 main.go:141] libmachine: (test-preload-986059) DBG | Using SSH client type: external
	I0414 13:27:08.192915 1210207 main.go:141] libmachine: (test-preload-986059) DBG | Using SSH private key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/test-preload-986059/id_rsa (-rw-------)
	I0414 13:27:08.192947 1210207 main.go:141] libmachine: (test-preload-986059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/test-preload-986059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:27:08.192970 1210207 main.go:141] libmachine: (test-preload-986059) DBG | About to run SSH command:
	I0414 13:27:08.192988 1210207 main.go:141] libmachine: (test-preload-986059) DBG | exit 0
	I0414 13:27:08.320139 1210207 main.go:141] libmachine: (test-preload-986059) DBG | SSH cmd err, output: <nil>: 
	I0414 13:27:08.320648 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetConfigRaw
	I0414 13:27:08.321341 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetIP
	I0414 13:27:08.324813 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.325183 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:08.325213 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.325510 1210207 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/config.json ...
	I0414 13:27:08.325764 1210207 machine.go:93] provisionDockerMachine start ...
	I0414 13:27:08.325789 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:27:08.326064 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:08.329064 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.329605 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:08.329642 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.329910 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:08.330228 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:08.330466 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:08.330806 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:08.331009 1210207 main.go:141] libmachine: Using SSH client type: native
	I0414 13:27:08.331345 1210207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0414 13:27:08.331360 1210207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 13:27:08.444179 1210207 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 13:27:08.444218 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetMachineName
	I0414 13:27:08.444519 1210207 buildroot.go:166] provisioning hostname "test-preload-986059"
	I0414 13:27:08.444554 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetMachineName
	I0414 13:27:08.444819 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:08.447724 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.448091 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:08.448127 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.448302 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:08.448520 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:08.448748 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:08.448882 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:08.449102 1210207 main.go:141] libmachine: Using SSH client type: native
	I0414 13:27:08.449356 1210207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0414 13:27:08.449372 1210207 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-986059 && echo "test-preload-986059" | sudo tee /etc/hostname
	I0414 13:27:08.579480 1210207 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-986059
	
	I0414 13:27:08.579523 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:08.582540 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.582830 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:08.582860 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.583111 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:08.583349 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:08.583577 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:08.583759 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:08.583966 1210207 main.go:141] libmachine: Using SSH client type: native
	I0414 13:27:08.584220 1210207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0414 13:27:08.584240 1210207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-986059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-986059/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-986059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:27:08.709860 1210207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:27:08.709893 1210207 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 13:27:08.709920 1210207 buildroot.go:174] setting up certificates
	I0414 13:27:08.709931 1210207 provision.go:84] configureAuth start
	I0414 13:27:08.709941 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetMachineName
	I0414 13:27:08.710388 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetIP
	I0414 13:27:08.713767 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.714262 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:08.714303 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.714486 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:08.717272 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.717593 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:08.717640 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.717800 1210207 provision.go:143] copyHostCerts
	I0414 13:27:08.717882 1210207 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem, removing ...
	I0414 13:27:08.717903 1210207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem
	I0414 13:27:08.717990 1210207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 13:27:08.718117 1210207 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem, removing ...
	I0414 13:27:08.718128 1210207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem
	I0414 13:27:08.718155 1210207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 13:27:08.718210 1210207 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem, removing ...
	I0414 13:27:08.718217 1210207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem
	I0414 13:27:08.718237 1210207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 13:27:08.718290 1210207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.test-preload-986059 san=[127.0.0.1 192.168.39.208 localhost minikube test-preload-986059]
	I0414 13:27:08.864499 1210207 provision.go:177] copyRemoteCerts
	I0414 13:27:08.864567 1210207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:27:08.864598 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:08.869430 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.869961 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:08.870002 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:08.870289 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:08.870605 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:08.870833 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:08.871083 1210207 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/test-preload-986059/id_rsa Username:docker}
	I0414 13:27:08.959297 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:27:08.989054 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0414 13:27:09.016421 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 13:27:09.043707 1210207 provision.go:87] duration metric: took 333.758834ms to configureAuth
	I0414 13:27:09.043744 1210207 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:27:09.043957 1210207 config.go:182] Loaded profile config "test-preload-986059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 13:27:09.044066 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:09.048706 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.049180 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:09.049213 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.049483 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:09.049754 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:09.049957 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:09.050124 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:09.050454 1210207 main.go:141] libmachine: Using SSH client type: native
	I0414 13:27:09.050746 1210207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0414 13:27:09.050767 1210207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:27:09.294408 1210207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:27:09.294439 1210207 machine.go:96] duration metric: took 968.658721ms to provisionDockerMachine
	I0414 13:27:09.294452 1210207 start.go:293] postStartSetup for "test-preload-986059" (driver="kvm2")
	I0414 13:27:09.294467 1210207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:27:09.294501 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:27:09.294902 1210207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:27:09.294955 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:09.298341 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.298738 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:09.298774 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.298956 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:09.299196 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:09.299329 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:09.299445 1210207 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/test-preload-986059/id_rsa Username:docker}
	I0414 13:27:09.386707 1210207 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:27:09.391695 1210207 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:27:09.391727 1210207 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 13:27:09.391825 1210207 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 13:27:09.391929 1210207 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem -> 11757462.pem in /etc/ssl/certs
	I0414 13:27:09.392055 1210207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:27:09.402679 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:27:09.431171 1210207 start.go:296] duration metric: took 136.698787ms for postStartSetup
	I0414 13:27:09.431220 1210207 fix.go:56] duration metric: took 20.022532342s for fixHost
	I0414 13:27:09.431249 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:09.434391 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.434723 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:09.434759 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.434971 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:09.435206 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:09.435368 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:09.435476 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:09.435627 1210207 main.go:141] libmachine: Using SSH client type: native
	I0414 13:27:09.435870 1210207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0414 13:27:09.435884 1210207 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:27:09.548753 1210207 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744637229.521388067
	
	I0414 13:27:09.548778 1210207 fix.go:216] guest clock: 1744637229.521388067
	I0414 13:27:09.548789 1210207 fix.go:229] Guest: 2025-04-14 13:27:09.521388067 +0000 UTC Remote: 2025-04-14 13:27:09.431225232 +0000 UTC m=+24.699420090 (delta=90.162835ms)
	I0414 13:27:09.548820 1210207 fix.go:200] guest clock delta is within tolerance: 90.162835ms
	I0414 13:27:09.548827 1210207 start.go:83] releasing machines lock for "test-preload-986059", held for 20.140153751s
	I0414 13:27:09.548851 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:27:09.549169 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetIP
	I0414 13:27:09.552242 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.552629 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:09.552663 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.552842 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:27:09.553642 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:27:09.553871 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:27:09.553978 1210207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:27:09.554029 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:09.554118 1210207 ssh_runner.go:195] Run: cat /version.json
	I0414 13:27:09.554137 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:09.557041 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.557173 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.557542 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:09.557582 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:09.557606 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.557625 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:09.557833 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:09.558008 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:09.558100 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:09.558259 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:09.558291 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:09.558419 1210207 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/test-preload-986059/id_rsa Username:docker}
	I0414 13:27:09.558472 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:09.558623 1210207 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/test-preload-986059/id_rsa Username:docker}
	I0414 13:27:09.663205 1210207 ssh_runner.go:195] Run: systemctl --version
	I0414 13:27:09.669829 1210207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:27:09.818590 1210207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:27:09.826574 1210207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:27:09.826873 1210207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:27:09.845299 1210207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:27:09.845337 1210207 start.go:495] detecting cgroup driver to use...
	I0414 13:27:09.845443 1210207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:27:09.865621 1210207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:27:09.885705 1210207 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:27:09.885831 1210207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:27:09.904157 1210207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:27:09.921555 1210207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:27:10.043175 1210207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:27:10.179552 1210207 docker.go:233] disabling docker service ...
	I0414 13:27:10.179631 1210207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:27:10.197343 1210207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:27:10.212398 1210207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:27:10.346219 1210207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:27:10.462931 1210207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:27:10.478079 1210207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:27:10.498554 1210207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0414 13:27:10.498638 1210207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:27:10.510760 1210207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:27:10.510856 1210207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:27:10.522850 1210207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:27:10.534573 1210207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:27:10.545506 1210207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:27:10.556721 1210207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:27:10.567689 1210207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:27:10.587270 1210207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:27:10.600917 1210207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:27:10.612077 1210207 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:27:10.612148 1210207 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:27:10.626836 1210207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:27:10.637988 1210207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:27:10.749071 1210207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:27:10.848059 1210207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:27:10.848162 1210207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:27:10.853821 1210207 start.go:563] Will wait 60s for crictl version
	I0414 13:27:10.853890 1210207 ssh_runner.go:195] Run: which crictl
	I0414 13:27:10.858632 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:27:10.899521 1210207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:27:10.899628 1210207 ssh_runner.go:195] Run: crio --version
	I0414 13:27:10.932073 1210207 ssh_runner.go:195] Run: crio --version
	I0414 13:27:10.973250 1210207 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0414 13:27:10.975370 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetIP
	I0414 13:27:10.981061 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:10.981905 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:10.981965 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:10.982261 1210207 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 13:27:10.986909 1210207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:27:11.002544 1210207 kubeadm.go:883] updating cluster {Name:test-preload-986059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-986059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:27:11.002746 1210207 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 13:27:11.002837 1210207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:27:11.053964 1210207 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0414 13:27:11.054086 1210207 ssh_runner.go:195] Run: which lz4
	I0414 13:27:11.059281 1210207 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:27:11.065399 1210207 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:27:11.065459 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0414 13:27:12.799945 1210207 crio.go:462] duration metric: took 1.740719107s to copy over tarball
	I0414 13:27:12.800057 1210207 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:27:15.456669 1210207 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.656576582s)
	I0414 13:27:15.456704 1210207 crio.go:469] duration metric: took 2.656698181s to extract the tarball
	I0414 13:27:15.456715 1210207 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:27:15.498873 1210207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:27:15.549016 1210207 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0414 13:27:15.549047 1210207 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 13:27:15.549143 1210207 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:27:15.549173 1210207 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:27:15.549173 1210207 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0414 13:27:15.549196 1210207 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0414 13:27:15.549207 1210207 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:27:15.549193 1210207 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:27:15.549232 1210207 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:27:15.549231 1210207 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:27:15.550398 1210207 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0414 13:27:15.550400 1210207 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:27:15.550462 1210207 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:27:15.550504 1210207 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:27:15.550525 1210207 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:27:15.550953 1210207 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:27:15.550961 1210207 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0414 13:27:15.550960 1210207 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:27:15.699117 1210207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:27:15.700211 1210207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0414 13:27:15.703359 1210207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:27:15.707882 1210207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:27:15.724982 1210207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:27:15.730922 1210207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:27:15.734784 1210207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0414 13:27:15.789647 1210207 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0414 13:27:15.789711 1210207 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:27:15.789766 1210207 ssh_runner.go:195] Run: which crictl
	I0414 13:27:15.835633 1210207 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0414 13:27:15.835751 1210207 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:27:15.835631 1210207 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0414 13:27:15.835856 1210207 ssh_runner.go:195] Run: which crictl
	I0414 13:27:15.835864 1210207 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0414 13:27:15.835893 1210207 ssh_runner.go:195] Run: which crictl
	I0414 13:27:15.901676 1210207 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0414 13:27:15.901725 1210207 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:27:15.901757 1210207 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0414 13:27:15.901782 1210207 ssh_runner.go:195] Run: which crictl
	I0414 13:27:15.901788 1210207 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0414 13:27:15.901814 1210207 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:27:15.901853 1210207 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0414 13:27:15.901866 1210207 ssh_runner.go:195] Run: which crictl
	I0414 13:27:15.901879 1210207 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0414 13:27:15.901897 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:27:15.901910 1210207 ssh_runner.go:195] Run: which crictl
	I0414 13:27:15.901792 1210207 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:27:15.901933 1210207 ssh_runner.go:195] Run: which crictl
	I0414 13:27:15.901974 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:27:15.901989 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 13:27:15.982592 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:27:15.982655 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 13:27:15.982721 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:27:15.982752 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:27:15.982742 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:27:15.982803 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:27:15.982841 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 13:27:16.131518 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 13:27:16.131543 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 13:27:16.147360 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 13:27:16.147430 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:27:16.147475 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:27:16.151471 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:27:16.151512 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 13:27:16.268614 1210207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0414 13:27:16.268762 1210207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 13:27:16.294501 1210207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0414 13:27:16.294618 1210207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 13:27:16.309658 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 13:27:16.309739 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 13:27:16.309758 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 13:27:16.322829 1210207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 13:27:16.322880 1210207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0414 13:27:16.322909 1210207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0414 13:27:16.322931 1210207 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 13:27:16.322963 1210207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0414 13:27:16.322971 1210207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 13:27:16.322989 1210207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0414 13:27:16.432742 1210207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0414 13:27:16.432863 1210207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0414 13:27:16.432880 1210207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0414 13:27:16.432978 1210207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0414 13:27:16.432984 1210207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0414 13:27:16.433050 1210207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0414 13:27:16.433081 1210207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 13:27:16.433136 1210207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 13:27:17.202815 1210207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:27:18.873748 1210207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.550740101s)
	I0414 13:27:18.873789 1210207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0414 13:27:18.873808 1210207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.550796751s)
	I0414 13:27:18.873847 1210207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0414 13:27:18.873818 1210207 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 13:27:18.873864 1210207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.440862898s)
	I0414 13:27:18.873890 1210207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.440988283s)
	I0414 13:27:18.873913 1210207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 13:27:18.873927 1210207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0414 13:27:18.873928 1210207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0414 13:27:18.873940 1210207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.440782011s)
	I0414 13:27:18.873960 1210207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0414 13:27:18.873980 1210207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.440883246s)
	I0414 13:27:18.873999 1210207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0414 13:27:18.874023 1210207 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.671172996s)
	I0414 13:27:19.322004 1210207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0414 13:27:19.322057 1210207 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0414 13:27:19.322120 1210207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0414 13:27:21.479184 1210207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.157030156s)
	I0414 13:27:21.479225 1210207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0414 13:27:21.479278 1210207 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0414 13:27:21.479341 1210207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0414 13:27:21.626865 1210207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0414 13:27:21.626932 1210207 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0414 13:27:21.627007 1210207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0414 13:27:22.072556 1210207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0414 13:27:22.072618 1210207 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 13:27:22.072680 1210207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 13:27:22.817849 1210207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0414 13:27:22.817909 1210207 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 13:27:22.817985 1210207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 13:27:23.674501 1210207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0414 13:27:23.674566 1210207 cache_images.go:123] Successfully loaded all cached images
	I0414 13:27:23.674574 1210207 cache_images.go:92] duration metric: took 8.125510356s to LoadCachedImages
	I0414 13:27:23.674591 1210207 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.24.4 crio true true} ...
	I0414 13:27:23.674718 1210207 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-986059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-986059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 13:27:23.674810 1210207 ssh_runner.go:195] Run: crio config
	I0414 13:27:23.724020 1210207 cni.go:84] Creating CNI manager for ""
	I0414 13:27:23.724068 1210207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:27:23.724082 1210207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:27:23.724109 1210207 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-986059 NodeName:test-preload-986059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 13:27:23.724295 1210207 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-986059"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 13:27:23.724403 1210207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0414 13:27:23.742038 1210207 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:27:23.742140 1210207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:27:23.753376 1210207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0414 13:27:23.772459 1210207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:27:23.791259 1210207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0414 13:27:23.812029 1210207 ssh_runner.go:195] Run: grep 192.168.39.208	control-plane.minikube.internal$ /etc/hosts
	I0414 13:27:23.816533 1210207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
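
	The two commands above first grep /etc/hosts for an existing control-plane.minikube.internal entry and, when it is missing or stale, rewrite the file by dropping the old line and appending the node IP. A rough Go equivalent, assuming direct write access instead of the temp-file-plus-sudo-cp step the log uses:

	package main

	import (
	    "fmt"
	    "log"
	    "os"
	    "strings"
	)

	// ensureHostsEntry mirrors the bash one-liner in the log: drop any existing
	// line ending in "<tab><host>" and append "<ip>\t<host>".
	func ensureHostsEntry(path, ip, host string) error {
	    data, err := os.ReadFile(path)
	    if err != nil {
	        return err
	    }
	    var kept []string
	    for _, line := range strings.Split(string(data), "\n") {
	        if strings.HasSuffix(line, "\t"+host) {
	            continue // stale entry, drop it
	        }
	        if line != "" {
	            kept = append(kept, line)
	        }
	    }
	    kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	    return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
	    if err := ensureHostsEntry("/etc/hosts", "192.168.39.208", "control-plane.minikube.internal"); err != nil {
	        log.Fatal(err)
	    }
	}
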
	I0414 13:27:23.831136 1210207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:27:23.948764 1210207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:27:23.966797 1210207 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059 for IP: 192.168.39.208
	I0414 13:27:23.966829 1210207 certs.go:194] generating shared ca certs ...
	I0414 13:27:23.966846 1210207 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:27:23.967041 1210207 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:27:23.967129 1210207 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:27:23.967149 1210207 certs.go:256] generating profile certs ...
	I0414 13:27:23.967251 1210207 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/client.key
	I0414 13:27:23.967331 1210207 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/apiserver.key.a161bcdf
	I0414 13:27:23.967387 1210207 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/proxy-client.key
	I0414 13:27:23.967504 1210207 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:27:23.967555 1210207 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:27:23.967569 1210207 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:27:23.967597 1210207 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:27:23.967631 1210207 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:27:23.967683 1210207 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:27:23.967730 1210207 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:27:23.968403 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:27:24.021383 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:27:24.056610 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:27:24.102141 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:27:24.136460 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0414 13:27:24.168563 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 13:27:24.202429 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:27:24.252553 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 13:27:24.276598 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
	I0414 13:27:24.302614 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:27:24.330718 1210207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:27:24.357493 1210207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:27:24.376792 1210207 ssh_runner.go:195] Run: openssl version
	I0414 13:27:24.383416 1210207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:27:24.395237 1210207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:27:24.399620 1210207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:27:24.399702 1210207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:27:24.405565 1210207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
	I0414 13:27:24.417131 1210207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:27:24.428780 1210207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:27:24.434464 1210207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:27:24.434557 1210207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:27:24.441143 1210207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 13:27:24.454207 1210207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:27:24.467159 1210207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:27:24.472328 1210207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:27:24.472391 1210207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:27:24.478741 1210207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
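
	Each CA that minikube installs under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (the 51391683.0, 3ec20f2e.0 and b5213941.0 names above are outputs of "openssl x509 -hash"), which is how the system trust store locates it. A sketch of that step, shelling out to openssl for the hash; the helper name is hypothetical:

	package main

	import (
	    "fmt"
	    "log"
	    "os"
	    "os/exec"
	    "strings"
	)

	// linkCert computes the OpenSSL subject hash of a PEM certificate and points
	// /etc/ssl/certs/<hash>.0 at it, the layout the system trust store expects.
	func linkCert(pem string) error {
	    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	    if err != nil {
	        return fmt.Errorf("hashing %s: %w", pem, err)
	    }
	    hash := strings.TrimSpace(string(out))
	    link := "/etc/ssl/certs/" + hash + ".0"
	    os.Remove(link) // ignore error; same effect as ln -fs
	    return os.Symlink(pem, link)
	}

	func main() {
	    if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	        log.Fatal(err)
	    }
	}
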
	I0414 13:27:24.491558 1210207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:27:24.496880 1210207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 13:27:24.503915 1210207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 13:27:24.510747 1210207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 13:27:24.517894 1210207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 13:27:24.524740 1210207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 13:27:24.531612 1210207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
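
	The "openssl x509 -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). An equivalent check written directly against crypto/x509, offered as an illustration rather than minikube's code:

	package main

	import (
	    "crypto/x509"
	    "encoding/pem"
	    "fmt"
	    "log"
	    "os"
	    "time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same question "openssl x509 -checkend 86400" answers for 24 hours.
	func expiresWithin(path string, d time.Duration) (bool, error) {
	    data, err := os.ReadFile(path)
	    if err != nil {
	        return false, err
	    }
	    block, _ := pem.Decode(data)
	    if block == nil {
	        return false, fmt.Errorf("no PEM data in %s", path)
	    }
	    cert, err := x509.ParseCertificate(block.Bytes)
	    if err != nil {
	        return false, err
	    }
	    return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
	    soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    if err != nil {
	        log.Fatal(err)
	    }
	    fmt.Println("expires within 24h:", soon)
	}
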
	I0414 13:27:24.538033 1210207 kubeadm.go:392] StartCluster: {Name:test-preload-986059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-
986059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:27:24.538118 1210207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:27:24.538232 1210207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:27:24.574978 1210207 cri.go:89] found id: ""
	I0414 13:27:24.575070 1210207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:27:24.586417 1210207 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 13:27:24.586441 1210207 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 13:27:24.586494 1210207 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 13:27:24.598054 1210207 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 13:27:24.598502 1210207 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-986059" does not appear in /home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:27:24.598598 1210207 kubeconfig.go:62] /home/jenkins/minikube-integration/20384-1167927/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-986059" cluster setting kubeconfig missing "test-preload-986059" context setting]
	I0414 13:27:24.598893 1210207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/kubeconfig: {Name:mk5eb6c4765d4c70f1db00acbce88c0952cb579b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:27:24.599522 1210207 kapi.go:59] client config for test-preload-986059: &rest.Config{Host:"https://192.168.39.208:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/client.crt", KeyFile:"/home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/client.key", CAFile:"/home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 13:27:24.600026 1210207 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 13:27:24.600044 1210207 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 13:27:24.600048 1210207 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 13:27:24.600052 1210207 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 13:27:24.600399 1210207 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 13:27:24.612216 1210207 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.208
	I0414 13:27:24.612269 1210207 kubeadm.go:1160] stopping kube-system containers ...
	I0414 13:27:24.612283 1210207 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 13:27:24.612346 1210207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:27:24.651909 1210207 cri.go:89] found id: ""
	I0414 13:27:24.651995 1210207 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 13:27:24.670618 1210207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:27:24.680830 1210207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:27:24.680862 1210207 kubeadm.go:157] found existing configuration files:
	
	I0414 13:27:24.680909 1210207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:27:24.690382 1210207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:27:24.690452 1210207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:27:24.700071 1210207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:27:24.710315 1210207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:27:24.710394 1210207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:27:24.721435 1210207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:27:24.731795 1210207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:27:24.731863 1210207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:27:24.742708 1210207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:27:24.753681 1210207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:27:24.753768 1210207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
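
	The grep/rm sequence above treats any /etc/kubernetes/*.conf kubeconfig that does not reference https://control-plane.minikube.internal:8443 as stale and deletes it so the following kubeadm phases can regenerate it. A compact sketch of the same check, with the file list copied from the log:

	package main

	import (
	    "log"
	    "os"
	    "strings"
	)

	func main() {
	    const endpoint = "https://control-plane.minikube.internal:8443"
	    confs := []string{
	        "/etc/kubernetes/admin.conf",
	        "/etc/kubernetes/kubelet.conf",
	        "/etc/kubernetes/controller-manager.conf",
	        "/etc/kubernetes/scheduler.conf",
	    }
	    for _, c := range confs {
	        data, err := os.ReadFile(c)
	        // A missing file, or one that does not point at the expected
	        // endpoint, is treated as stale and removed so that
	        // "kubeadm init phase kubeconfig" recreates it.
	        if err != nil || !strings.Contains(string(data), endpoint) {
	            if rmErr := os.Remove(c); rmErr != nil && !os.IsNotExist(rmErr) {
	                log.Printf("removing %s: %v", c, rmErr)
	            }
	        }
	    }
	}
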
	I0414 13:27:24.764413 1210207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:27:24.775777 1210207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:27:24.897304 1210207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:27:25.725811 1210207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:27:25.999099 1210207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:27:26.084522 1210207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
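
	Rather than a full "kubeadm init", the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence; the PATH handling is simplified compared with the env PATH="..." wrapper in the log:

	package main

	import (
	    "log"
	    "os/exec"
	)

	func main() {
	    phases := [][]string{
	        {"certs", "all"},
	        {"kubeconfig", "all"},
	        {"kubelet-start"},
	        {"control-plane", "all"},
	        {"etcd", "local"},
	    }
	    for _, p := range phases {
	        args := append([]string{"env", "PATH=/var/lib/minikube/binaries/v1.24.4:/usr/bin:/bin",
	            "kubeadm", "init", "phase"}, p...)
	        args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	        // Each phase is run on its own, as in the log, so a restarted node
	        // reuses existing state instead of re-initialising from scratch.
	        if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
	            log.Fatalf("phase %v failed: %v\n%s", p, err, out)
	        }
	    }
	}
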
	I0414 13:27:26.180708 1210207 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:27:26.180831 1210207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:27:26.681782 1210207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:27:27.181011 1210207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:27:27.220222 1210207 api_server.go:72] duration metric: took 1.039514714s to wait for apiserver process to appear ...
	I0414 13:27:27.220255 1210207 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:27:27.220282 1210207 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0414 13:27:27.220846 1210207 api_server.go:269] stopped: https://192.168.39.208:8443/healthz: Get "https://192.168.39.208:8443/healthz": dial tcp 192.168.39.208:8443: connect: connection refused
	I0414 13:27:27.720800 1210207 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0414 13:27:27.721536 1210207 api_server.go:269] stopped: https://192.168.39.208:8443/healthz: Get "https://192.168.39.208:8443/healthz": dial tcp 192.168.39.208:8443: connect: connection refused
	I0414 13:27:28.220872 1210207 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0414 13:27:30.733619 1210207 api_server.go:279] https://192.168.39.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 13:27:30.733677 1210207 api_server.go:103] status: https://192.168.39.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 13:27:30.733699 1210207 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0414 13:27:30.757398 1210207 api_server.go:279] https://192.168.39.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 13:27:30.757436 1210207 api_server.go:103] status: https://192.168.39.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 13:27:31.220920 1210207 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0414 13:27:31.227194 1210207 api_server.go:279] https://192.168.39.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 13:27:31.227232 1210207 api_server.go:103] status: https://192.168.39.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 13:27:31.720883 1210207 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0414 13:27:31.727826 1210207 api_server.go:279] https://192.168.39.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 13:27:31.727866 1210207 api_server.go:103] status: https://192.168.39.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 13:27:32.220539 1210207 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0414 13:27:32.226908 1210207 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I0414 13:27:32.234950 1210207 api_server.go:141] control plane version: v1.24.4
	I0414 13:27:32.234984 1210207 api_server.go:131] duration metric: took 5.014720127s to wait for apiserver health ...
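
	The healthz wait above tolerates connection refused (apiserver not listening yet), 403 (anonymous access to /healthz is rejected until the RBAC bootstrap roles exist) and 500 (post-start hooks such as rbac/bootstrap-roles still pending), and only stops on a 200. A minimal poller along those lines; TLS verification is skipped here for brevity, whereas the real checker trusts the cluster CA:

	package main

	import (
	    "crypto/tls"
	    "fmt"
	    "net/http"
	    "time"
	)

	func main() {
	    client := &http.Client{
	        Timeout: 2 * time.Second,
	        // Illustrative only: minikube's checker uses the cluster CA
	        // instead of disabling certificate verification.
	        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    }
	    deadline := time.Now().Add(2 * time.Minute)
	    for time.Now().Before(deadline) {
	        resp, err := client.Get("https://192.168.39.208:8443/healthz")
	        if err == nil {
	            code := resp.StatusCode
	            resp.Body.Close()
	            if code == http.StatusOK {
	                fmt.Println("apiserver healthy")
	                return
	            }
	            // 403 (anonymous forbidden) and 500 (post-start hooks still
	            // running) both mean "keep waiting", exactly as in the log.
	        }
	        time.Sleep(500 * time.Millisecond)
	    }
	    fmt.Println("timed out waiting for apiserver health")
	}
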
	I0414 13:27:32.234995 1210207 cni.go:84] Creating CNI manager for ""
	I0414 13:27:32.235002 1210207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:27:32.236633 1210207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 13:27:32.238194 1210207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 13:27:32.253702 1210207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
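
	The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not reproduced in the log; the configuration below is an illustrative bridge-plus-portmap conflist matching the pod CIDR chosen earlier (10.244.0.0/16), not the verbatim file minikube ships:

	package main

	import (
	    "log"
	    "os"
	)

	// An illustrative bridge CNI configuration in the spirit of minikube's
	// /etc/cni/net.d/1-k8s.conflist; the real file's contents may differ.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
	    if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
	        log.Fatal(err)
	    }
	    if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
	        log.Fatal(err)
	    }
	}
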
	I0414 13:27:32.286669 1210207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:27:32.293943 1210207 system_pods.go:59] 7 kube-system pods found
	I0414 13:27:32.293978 1210207 system_pods.go:61] "coredns-6d4b75cb6d-kwt6l" [bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd] Running
	I0414 13:27:32.293984 1210207 system_pods.go:61] "etcd-test-preload-986059" [35122646-28e8-42fe-b94a-3d631477cbb4] Running
	I0414 13:27:32.293987 1210207 system_pods.go:61] "kube-apiserver-test-preload-986059" [d13e1396-527a-4e27-8b7d-bb79e40591ec] Running
	I0414 13:27:32.293992 1210207 system_pods.go:61] "kube-controller-manager-test-preload-986059" [9d9a97de-a771-4a31-87c1-bcf17a0e5d60] Running
	I0414 13:27:32.294002 1210207 system_pods.go:61] "kube-proxy-q7bbm" [3f956a39-70d8-4e40-b79d-1d61c09503c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 13:27:32.294007 1210207 system_pods.go:61] "kube-scheduler-test-preload-986059" [68a1e781-ba95-486f-9ab6-528ee652765e] Running
	I0414 13:27:32.294020 1210207 system_pods.go:61] "storage-provisioner" [f8ad91bb-f016-4cb5-8493-a0ea8b1738fb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 13:27:32.294030 1210207 system_pods.go:74] duration metric: took 7.33307ms to wait for pod list to return data ...
	I0414 13:27:32.294043 1210207 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:27:32.297034 1210207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:27:32.297061 1210207 node_conditions.go:123] node cpu capacity is 2
	I0414 13:27:32.297074 1210207 node_conditions.go:105] duration metric: took 3.018684ms to run NodePressure ...
	I0414 13:27:32.297094 1210207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:27:32.668554 1210207 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 13:27:32.674473 1210207 retry.go:31] will retry after 286.596927ms: kubelet not initialised
	I0414 13:27:32.966914 1210207 retry.go:31] will retry after 293.419161ms: kubelet not initialised
	I0414 13:27:33.266856 1210207 retry.go:31] will retry after 763.730228ms: kubelet not initialised
	I0414 13:27:34.035510 1210207 retry.go:31] will retry after 722.717671ms: kubelet not initialised
	I0414 13:27:34.762669 1210207 retry.go:31] will retry after 968.860387ms: kubelet not initialised
	I0414 13:27:35.737229 1210207 retry.go:31] will retry after 1.074364357s: kubelet not initialised
	I0414 13:27:36.816898 1210207 kubeadm.go:739] kubelet initialised
	I0414 13:27:36.816925 1210207 kubeadm.go:740] duration metric: took 4.14833117s waiting for restarted kubelet to initialise ...
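
	The roughly 0.3s to 1s gaps between the retries above come from minikube's retry helper, which sleeps a growing, jittered interval between attempts until the kubelet reports initialised or the deadline passes. A generic sketch of such a loop; the constants and helper name are illustrative:

	package main

	import (
	    "errors"
	    "fmt"
	    "math/rand"
	    "time"
	)

	// retry calls fn until it succeeds or maxWait elapses, sleeping a jittered,
	// growing interval between attempts, similar to the gaps in the log above.
	func retry(maxWait time.Duration, fn func() error) error {
	    start := time.Now()
	    backoff := 250 * time.Millisecond
	    for {
	        if err := fn(); err == nil {
	            return nil
	        }
	        if time.Since(start) > maxWait {
	            return errors.New("timed out")
	        }
	        sleep := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
	        fmt.Printf("will retry after %v\n", sleep)
	        time.Sleep(sleep)
	        if backoff < 2*time.Second {
	            backoff += 250 * time.Millisecond
	        }
	    }
	}

	func main() {
	    attempts := 0
	    _ = retry(30*time.Second, func() error {
	        attempts++
	        if attempts < 5 {
	            return errors.New("kubelet not initialised")
	        }
	        return nil
	    })
	    fmt.Println("kubelet initialised after", attempts, "attempts")
	}
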
	I0414 13:27:36.816934 1210207 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:27:36.820469 1210207 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-kwt6l" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:36.825648 1210207 pod_ready.go:98] node "test-preload-986059" hosting pod "coredns-6d4b75cb6d-kwt6l" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:36.825675 1210207 pod_ready.go:82] duration metric: took 5.178018ms for pod "coredns-6d4b75cb6d-kwt6l" in "kube-system" namespace to be "Ready" ...
	E0414 13:27:36.825686 1210207 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-986059" hosting pod "coredns-6d4b75cb6d-kwt6l" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:36.825693 1210207 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:36.831449 1210207 pod_ready.go:98] node "test-preload-986059" hosting pod "etcd-test-preload-986059" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:36.831477 1210207 pod_ready.go:82] duration metric: took 5.774774ms for pod "etcd-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	E0414 13:27:36.831488 1210207 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-986059" hosting pod "etcd-test-preload-986059" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:36.831495 1210207 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:36.837237 1210207 pod_ready.go:98] node "test-preload-986059" hosting pod "kube-apiserver-test-preload-986059" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:36.837271 1210207 pod_ready.go:82] duration metric: took 5.765377ms for pod "kube-apiserver-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	E0414 13:27:36.837283 1210207 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-986059" hosting pod "kube-apiserver-test-preload-986059" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:36.837289 1210207 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:36.842095 1210207 pod_ready.go:98] node "test-preload-986059" hosting pod "kube-controller-manager-test-preload-986059" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:36.842124 1210207 pod_ready.go:82] duration metric: took 4.825492ms for pod "kube-controller-manager-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	E0414 13:27:36.842135 1210207 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-986059" hosting pod "kube-controller-manager-test-preload-986059" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:36.842142 1210207 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q7bbm" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:37.216544 1210207 pod_ready.go:98] node "test-preload-986059" hosting pod "kube-proxy-q7bbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:37.216574 1210207 pod_ready.go:82] duration metric: took 374.423309ms for pod "kube-proxy-q7bbm" in "kube-system" namespace to be "Ready" ...
	E0414 13:27:37.216585 1210207 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-986059" hosting pod "kube-proxy-q7bbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:37.216591 1210207 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:37.616251 1210207 pod_ready.go:98] node "test-preload-986059" hosting pod "kube-scheduler-test-preload-986059" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:37.616297 1210207 pod_ready.go:82] duration metric: took 399.698475ms for pod "kube-scheduler-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	E0414 13:27:37.616312 1210207 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-986059" hosting pod "kube-scheduler-test-preload-986059" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:37.616321 1210207 pod_ready.go:39] duration metric: took 799.377519ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
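
	Each pod_ready wait above checks the pod's Ready condition through the API server, skipping pods on a node that is not yet Ready. A stand-in for that check using kubectl's jsonpath output instead of the client-go calls pod_ready.go actually makes:

	package main

	import (
	    "fmt"
	    "os/exec"
	    "strings"
	    "time"
	)

	// podReady asks the API server (via kubectl) whether the pod's Ready
	// condition is True; an approximation of what pod_ready.go verifies.
	func podReady(namespace, name string) bool {
	    out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
	        "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	    return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
	    deadline := time.Now().Add(6 * time.Minute)
	    for time.Now().Before(deadline) {
	        if podReady("kube-system", "kube-scheduler-test-preload-986059") {
	            fmt.Println("pod is Ready")
	            return
	        }
	        time.Sleep(2 * time.Second)
	    }
	    fmt.Println("timed out waiting for pod to be Ready")
	}
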
	I0414 13:27:37.616354 1210207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 13:27:37.628607 1210207 ops.go:34] apiserver oom_adj: -16
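
	The oom_adj read above confirms the apiserver runs with a strongly negative OOM adjustment (-16), so the kernel avoids killing it under memory pressure. A small sketch of the same lookup; the log uses pgrep -xnf with a fuller pattern than the simplified form here:

	package main

	import (
	    "fmt"
	    "log"
	    "os"
	    "os/exec"
	    "strings"
	)

	func main() {
	    // Find the kube-apiserver PID via pgrep, as the log does.
	    out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	    if err != nil {
	        log.Fatalf("pgrep: %v", err)
	    }
	    pid := strings.TrimSpace(string(out))
	    // /proc/<pid>/oom_adj is the legacy knob the log reads; -16 means the
	    // kernel strongly prefers not to OOM-kill this process.
	    data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	    if err != nil {
	        log.Fatal(err)
	    }
	    fmt.Printf("apiserver oom_adj: %s", data)
	}
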
	I0414 13:27:37.628631 1210207 kubeadm.go:597] duration metric: took 13.042184216s to restartPrimaryControlPlane
	I0414 13:27:37.628642 1210207 kubeadm.go:394] duration metric: took 13.090619353s to StartCluster
	I0414 13:27:37.628661 1210207 settings.go:142] acquiring lock: {Name:mkc68e13b098b3e7461fc88804a0aed191118bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:27:37.628735 1210207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:27:37.629339 1210207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/kubeconfig: {Name:mk5eb6c4765d4c70f1db00acbce88c0952cb579b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:27:37.629579 1210207 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:27:37.629707 1210207 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 13:27:37.629820 1210207 addons.go:69] Setting storage-provisioner=true in profile "test-preload-986059"
	I0414 13:27:37.629842 1210207 addons.go:238] Setting addon storage-provisioner=true in "test-preload-986059"
	I0414 13:27:37.629774 1210207 config.go:182] Loaded profile config "test-preload-986059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 13:27:37.629860 1210207 addons.go:69] Setting default-storageclass=true in profile "test-preload-986059"
	I0414 13:27:37.629878 1210207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-986059"
	W0414 13:27:37.629852 1210207 addons.go:247] addon storage-provisioner should already be in state true
	I0414 13:27:37.629952 1210207 host.go:66] Checking if "test-preload-986059" exists ...
	I0414 13:27:37.630298 1210207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:27:37.630336 1210207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:27:37.630301 1210207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:27:37.630423 1210207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:27:37.631503 1210207 out.go:177] * Verifying Kubernetes components...
	I0414 13:27:37.633124 1210207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:27:37.646584 1210207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
	I0414 13:27:37.647118 1210207 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:27:37.647146 1210207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I0414 13:27:37.647638 1210207 main.go:141] libmachine: Using API Version  1
	I0414 13:27:37.647671 1210207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:27:37.647758 1210207 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:27:37.648111 1210207 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:27:37.648315 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetState
	I0414 13:27:37.648357 1210207 main.go:141] libmachine: Using API Version  1
	I0414 13:27:37.648383 1210207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:27:37.648790 1210207 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:27:37.649332 1210207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:27:37.649394 1210207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:27:37.651135 1210207 kapi.go:59] client config for test-preload-986059: &rest.Config{Host:"https://192.168.39.208:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/client.crt", KeyFile:"/home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/test-preload-986059/client.key", CAFile:"/home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 13:27:37.651527 1210207 addons.go:238] Setting addon default-storageclass=true in "test-preload-986059"
	W0414 13:27:37.651550 1210207 addons.go:247] addon default-storageclass should already be in state true
	I0414 13:27:37.651585 1210207 host.go:66] Checking if "test-preload-986059" exists ...
	I0414 13:27:37.651992 1210207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:27:37.652065 1210207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:27:37.667601 1210207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I0414 13:27:37.668313 1210207 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:27:37.668908 1210207 main.go:141] libmachine: Using API Version  1
	I0414 13:27:37.668942 1210207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:27:37.669381 1210207 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:27:37.669648 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetState
	I0414 13:27:37.669955 1210207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36497
	I0414 13:27:37.670552 1210207 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:27:37.671178 1210207 main.go:141] libmachine: Using API Version  1
	I0414 13:27:37.671200 1210207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:27:37.671608 1210207 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:27:37.671757 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:27:37.672299 1210207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:27:37.672358 1210207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:27:37.674057 1210207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:27:37.676106 1210207 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:27:37.676146 1210207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 13:27:37.676180 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:37.680671 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:37.681281 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:37.681327 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:37.681537 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:37.681778 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:37.681952 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:37.682137 1210207 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/test-preload-986059/id_rsa Username:docker}
	I0414 13:27:37.712445 1210207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I0414 13:27:37.713143 1210207 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:27:37.713785 1210207 main.go:141] libmachine: Using API Version  1
	I0414 13:27:37.713811 1210207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:27:37.714358 1210207 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:27:37.714600 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetState
	I0414 13:27:37.716761 1210207 main.go:141] libmachine: (test-preload-986059) Calling .DriverName
	I0414 13:27:37.717106 1210207 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 13:27:37.717129 1210207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 13:27:37.717150 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHHostname
	I0414 13:27:37.721555 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:37.722174 1210207 main.go:141] libmachine: (test-preload-986059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:a1:cd", ip: ""} in network mk-test-preload-986059: {Iface:virbr1 ExpiryTime:2025-04-14 14:27:00 +0000 UTC Type:0 Mac:52:54:00:dc:a1:cd Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:test-preload-986059 Clientid:01:52:54:00:dc:a1:cd}
	I0414 13:27:37.722216 1210207 main.go:141] libmachine: (test-preload-986059) DBG | domain test-preload-986059 has defined IP address 192.168.39.208 and MAC address 52:54:00:dc:a1:cd in network mk-test-preload-986059
	I0414 13:27:37.722493 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHPort
	I0414 13:27:37.722762 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHKeyPath
	I0414 13:27:37.722962 1210207 main.go:141] libmachine: (test-preload-986059) Calling .GetSSHUsername
	I0414 13:27:37.723182 1210207 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/test-preload-986059/id_rsa Username:docker}
	I0414 13:27:37.831689 1210207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:27:37.859131 1210207 node_ready.go:35] waiting up to 6m0s for node "test-preload-986059" to be "Ready" ...
	I0414 13:27:37.948095 1210207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:27:37.959331 1210207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 13:27:38.959835 1210207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.011686047s)
	I0414 13:27:38.959906 1210207 main.go:141] libmachine: Making call to close driver server
	I0414 13:27:38.959919 1210207 main.go:141] libmachine: (test-preload-986059) Calling .Close
	I0414 13:27:38.960269 1210207 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:27:38.960291 1210207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:27:38.960303 1210207 main.go:141] libmachine: Making call to close driver server
	I0414 13:27:38.960313 1210207 main.go:141] libmachine: (test-preload-986059) Calling .Close
	I0414 13:27:38.960576 1210207 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:27:38.960593 1210207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:27:38.960627 1210207 main.go:141] libmachine: (test-preload-986059) DBG | Closing plugin on server side
	I0414 13:27:38.971741 1210207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.012338063s)
	I0414 13:27:38.971825 1210207 main.go:141] libmachine: Making call to close driver server
	I0414 13:27:38.971839 1210207 main.go:141] libmachine: (test-preload-986059) Calling .Close
	I0414 13:27:38.972172 1210207 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:27:38.972195 1210207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:27:38.972214 1210207 main.go:141] libmachine: Making call to close driver server
	I0414 13:27:38.972223 1210207 main.go:141] libmachine: (test-preload-986059) Calling .Close
	I0414 13:27:38.972488 1210207 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:27:38.972504 1210207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:27:38.972525 1210207 main.go:141] libmachine: (test-preload-986059) DBG | Closing plugin on server side
	I0414 13:27:38.978958 1210207 main.go:141] libmachine: Making call to close driver server
	I0414 13:27:38.978989 1210207 main.go:141] libmachine: (test-preload-986059) Calling .Close
	I0414 13:27:38.979338 1210207 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:27:38.979433 1210207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:27:38.979391 1210207 main.go:141] libmachine: (test-preload-986059) DBG | Closing plugin on server side
	I0414 13:27:38.981903 1210207 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 13:27:38.983621 1210207 addons.go:514] duration metric: took 1.353918754s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 13:27:39.865924 1210207 node_ready.go:53] node "test-preload-986059" has status "Ready":"False"
	I0414 13:27:41.363751 1210207 node_ready.go:49] node "test-preload-986059" has status "Ready":"True"
	I0414 13:27:41.363780 1210207 node_ready.go:38] duration metric: took 3.504607248s for node "test-preload-986059" to be "Ready" ...
	I0414 13:27:41.363789 1210207 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:27:41.367522 1210207 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-kwt6l" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:41.372476 1210207 pod_ready.go:93] pod "coredns-6d4b75cb6d-kwt6l" in "kube-system" namespace has status "Ready":"True"
	I0414 13:27:41.372513 1210207 pod_ready.go:82] duration metric: took 4.953871ms for pod "coredns-6d4b75cb6d-kwt6l" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:41.372524 1210207 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:41.377564 1210207 pod_ready.go:93] pod "etcd-test-preload-986059" in "kube-system" namespace has status "Ready":"True"
	I0414 13:27:41.377590 1210207 pod_ready.go:82] duration metric: took 5.060611ms for pod "etcd-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:41.377599 1210207 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:43.384331 1210207 pod_ready.go:103] pod "kube-apiserver-test-preload-986059" in "kube-system" namespace has status "Ready":"False"
	I0414 13:27:45.389491 1210207 pod_ready.go:103] pod "kube-apiserver-test-preload-986059" in "kube-system" namespace has status "Ready":"False"
	I0414 13:27:46.384723 1210207 pod_ready.go:93] pod "kube-apiserver-test-preload-986059" in "kube-system" namespace has status "Ready":"True"
	I0414 13:27:46.384751 1210207 pod_ready.go:82] duration metric: took 5.007145281s for pod "kube-apiserver-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:46.384762 1210207 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:46.390550 1210207 pod_ready.go:93] pod "kube-controller-manager-test-preload-986059" in "kube-system" namespace has status "Ready":"True"
	I0414 13:27:46.390581 1210207 pod_ready.go:82] duration metric: took 5.813535ms for pod "kube-controller-manager-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:46.390594 1210207 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q7bbm" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:46.396294 1210207 pod_ready.go:93] pod "kube-proxy-q7bbm" in "kube-system" namespace has status "Ready":"True"
	I0414 13:27:46.396323 1210207 pod_ready.go:82] duration metric: took 5.721753ms for pod "kube-proxy-q7bbm" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:46.396336 1210207 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:46.402159 1210207 pod_ready.go:93] pod "kube-scheduler-test-preload-986059" in "kube-system" namespace has status "Ready":"True"
	I0414 13:27:46.402193 1210207 pod_ready.go:82] duration metric: took 5.848473ms for pod "kube-scheduler-test-preload-986059" in "kube-system" namespace to be "Ready" ...
	I0414 13:27:46.402208 1210207 pod_ready.go:39] duration metric: took 5.038405121s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:27:46.402233 1210207 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:27:46.402302 1210207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:27:46.417522 1210207 api_server.go:72] duration metric: took 8.787899448s to wait for apiserver process to appear ...
	I0414 13:27:46.417558 1210207 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:27:46.417583 1210207 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0414 13:27:46.423963 1210207 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I0414 13:27:46.425032 1210207 api_server.go:141] control plane version: v1.24.4
	I0414 13:27:46.425058 1210207 api_server.go:131] duration metric: took 7.491195ms to wait for apiserver health ...
	I0414 13:27:46.425068 1210207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:27:46.428755 1210207 system_pods.go:59] 7 kube-system pods found
	I0414 13:27:46.428785 1210207 system_pods.go:61] "coredns-6d4b75cb6d-kwt6l" [bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd] Running
	I0414 13:27:46.428790 1210207 system_pods.go:61] "etcd-test-preload-986059" [35122646-28e8-42fe-b94a-3d631477cbb4] Running
	I0414 13:27:46.428794 1210207 system_pods.go:61] "kube-apiserver-test-preload-986059" [d13e1396-527a-4e27-8b7d-bb79e40591ec] Running
	I0414 13:27:46.428798 1210207 system_pods.go:61] "kube-controller-manager-test-preload-986059" [9d9a97de-a771-4a31-87c1-bcf17a0e5d60] Running
	I0414 13:27:46.428801 1210207 system_pods.go:61] "kube-proxy-q7bbm" [3f956a39-70d8-4e40-b79d-1d61c09503c0] Running
	I0414 13:27:46.428804 1210207 system_pods.go:61] "kube-scheduler-test-preload-986059" [68a1e781-ba95-486f-9ab6-528ee652765e] Running
	I0414 13:27:46.428806 1210207 system_pods.go:61] "storage-provisioner" [f8ad91bb-f016-4cb5-8493-a0ea8b1738fb] Running
	I0414 13:27:46.428811 1210207 system_pods.go:74] duration metric: took 3.739187ms to wait for pod list to return data ...
	I0414 13:27:46.428820 1210207 default_sa.go:34] waiting for default service account to be created ...
	I0414 13:27:46.563029 1210207 default_sa.go:45] found service account: "default"
	I0414 13:27:46.563062 1210207 default_sa.go:55] duration metric: took 134.236016ms for default service account to be created ...
	I0414 13:27:46.563074 1210207 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 13:27:46.764613 1210207 system_pods.go:86] 7 kube-system pods found
	I0414 13:27:46.764654 1210207 system_pods.go:89] "coredns-6d4b75cb6d-kwt6l" [bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd] Running
	I0414 13:27:46.764661 1210207 system_pods.go:89] "etcd-test-preload-986059" [35122646-28e8-42fe-b94a-3d631477cbb4] Running
	I0414 13:27:46.764665 1210207 system_pods.go:89] "kube-apiserver-test-preload-986059" [d13e1396-527a-4e27-8b7d-bb79e40591ec] Running
	I0414 13:27:46.764669 1210207 system_pods.go:89] "kube-controller-manager-test-preload-986059" [9d9a97de-a771-4a31-87c1-bcf17a0e5d60] Running
	I0414 13:27:46.764673 1210207 system_pods.go:89] "kube-proxy-q7bbm" [3f956a39-70d8-4e40-b79d-1d61c09503c0] Running
	I0414 13:27:46.764679 1210207 system_pods.go:89] "kube-scheduler-test-preload-986059" [68a1e781-ba95-486f-9ab6-528ee652765e] Running
	I0414 13:27:46.764682 1210207 system_pods.go:89] "storage-provisioner" [f8ad91bb-f016-4cb5-8493-a0ea8b1738fb] Running
	I0414 13:27:46.764689 1210207 system_pods.go:126] duration metric: took 201.6099ms to wait for k8s-apps to be running ...
	I0414 13:27:46.764697 1210207 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 13:27:46.764760 1210207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:27:46.781350 1210207 system_svc.go:56] duration metric: took 16.63937ms WaitForService to wait for kubelet
	I0414 13:27:46.781390 1210207 kubeadm.go:582] duration metric: took 9.151775807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:27:46.781411 1210207 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:27:46.964452 1210207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:27:46.964502 1210207 node_conditions.go:123] node cpu capacity is 2
	I0414 13:27:46.964520 1210207 node_conditions.go:105] duration metric: took 183.1044ms to run NodePressure ...
	I0414 13:27:46.964538 1210207 start.go:241] waiting for startup goroutines ...
	I0414 13:27:46.964547 1210207 start.go:246] waiting for cluster config update ...
	I0414 13:27:46.964564 1210207 start.go:255] writing updated cluster config ...
	I0414 13:27:46.964872 1210207 ssh_runner.go:195] Run: rm -f paused
	I0414 13:27:47.019725 1210207 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0414 13:27:47.022581 1210207 out.go:201] 
	W0414 13:27:47.024489 1210207 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0414 13:27:47.026443 1210207 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0414 13:27:47.028375 1210207 out.go:177] * Done! kubectl is now configured to use "test-preload-986059" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.031529226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744637268031502436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2097d1f5-9d44-40c0-bd7c-fa3c6436ff6b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.032897479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=badcc283-ed7d-4821-abda-b99b777fb05e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.032973412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=badcc283-ed7d-4821-abda-b99b777fb05e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.033164212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:670f3a17765b7df5cd9b17bcaf2f6aca83959fcaa184faba20885dd4d96d7506,PodSandboxId:79006065d6efdece85e7c327e667e733713f4612d2861851a95353d147fc6824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744637265270393422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8ad91bb-f016-4cb5-8493-a0ea8b1738fb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cb689e3,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c215de29b3296ffbf331cdd2eadc363dbe09d137ef5513f5080ea0ab29cbafba,PodSandboxId:3d2583783449fd9c46b582364abc36c2e293b4744e22df2ae6b32906948a277a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744637259615914543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3653f5bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f47b16d203eec7a2b15316d290921a1302ae15ce17a561d282d90f55e2581908,PodSandboxId:44b0a2135ea5b787d6d00f3875128e52c80333c4b1dfb88ee7edd827a87f2593,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744637252517744070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7bbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
956a39-70d8-4e40-b79d-1d61c09503c0,},Annotations:map[string]string{io.kubernetes.container.hash: ea9df5b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2c95a659f4962fb48aa9a6551cbd79a92acd4ebd192c42e969cd3a9938ff5e,PodSandboxId:79006065d6efdece85e7c327e667e733713f4612d2861851a95353d147fc6824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1744637252308085464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8ad91bb-f016-4
cb5-8493-a0ea8b1738fb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cb689e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5001f30d4ace70077497c69535feaa811e03f7dd51a14b8cf68d11bb448cd64a,PodSandboxId:76e73451db02e82196aa770cdd3bf7422ff6742b75a004fc45b36266140cc3c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744637246965167012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a57098e2cab1685da8fb
c6599806f6,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1525230c2cfa380d8c01a28fc2c3fcdf5655c41505fe003f0921baa9c2f30c91,PodSandboxId:d5988d13ff51f43004a644f9c1102728099dac2be01811b5b4b0fe407319f5a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744637246906975539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67474be7df72d1354e42b89b06a0d7ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 60e14c96,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2277f4ad2d9ecddca093fc30bd5a088ad1e5476074043b42f810f92132b3ccd6,PodSandboxId:91db55b60dcd01a347ac0e81b3c7328767eee003925cd7cd0dd25e9dd0077353,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744637246878961578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0be70584bf135404c2228cc7641f930,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da0e07c7420187060e8e97fdc482e7ad449db0f1e51531c7d37f489104314636,PodSandboxId:6f09ec54695c367fe7d5488e72b6c729afa93cc135bc7de150d658ab7a5dfabb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744637246827749923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f123b3c03655a902147d1e7085058374,},Annotations:map[string]
string{io.kubernetes.container.hash: 90c6c47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=badcc283-ed7d-4821-abda-b99b777fb05e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.074947781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd8f5e37-2dfc-4ade-b4fe-7febe44a4529 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.075049088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd8f5e37-2dfc-4ade-b4fe-7febe44a4529 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.076661503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ada74143-f94a-482b-ba64-7c30b0988f3f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.077149930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744637268077120135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ada74143-f94a-482b-ba64-7c30b0988f3f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.077991424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87f546c9-d496-483c-b0df-bf1e7dfaf3b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.078078751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87f546c9-d496-483c-b0df-bf1e7dfaf3b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.078272767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:670f3a17765b7df5cd9b17bcaf2f6aca83959fcaa184faba20885dd4d96d7506,PodSandboxId:79006065d6efdece85e7c327e667e733713f4612d2861851a95353d147fc6824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744637265270393422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8ad91bb-f016-4cb5-8493-a0ea8b1738fb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cb689e3,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c215de29b3296ffbf331cdd2eadc363dbe09d137ef5513f5080ea0ab29cbafba,PodSandboxId:3d2583783449fd9c46b582364abc36c2e293b4744e22df2ae6b32906948a277a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744637259615914543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3653f5bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f47b16d203eec7a2b15316d290921a1302ae15ce17a561d282d90f55e2581908,PodSandboxId:44b0a2135ea5b787d6d00f3875128e52c80333c4b1dfb88ee7edd827a87f2593,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744637252517744070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7bbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
956a39-70d8-4e40-b79d-1d61c09503c0,},Annotations:map[string]string{io.kubernetes.container.hash: ea9df5b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2c95a659f4962fb48aa9a6551cbd79a92acd4ebd192c42e969cd3a9938ff5e,PodSandboxId:79006065d6efdece85e7c327e667e733713f4612d2861851a95353d147fc6824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1744637252308085464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8ad91bb-f016-4
cb5-8493-a0ea8b1738fb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cb689e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5001f30d4ace70077497c69535feaa811e03f7dd51a14b8cf68d11bb448cd64a,PodSandboxId:76e73451db02e82196aa770cdd3bf7422ff6742b75a004fc45b36266140cc3c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744637246965167012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a57098e2cab1685da8fb
c6599806f6,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1525230c2cfa380d8c01a28fc2c3fcdf5655c41505fe003f0921baa9c2f30c91,PodSandboxId:d5988d13ff51f43004a644f9c1102728099dac2be01811b5b4b0fe407319f5a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744637246906975539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67474be7df72d1354e42b89b06a0d7ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 60e14c96,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2277f4ad2d9ecddca093fc30bd5a088ad1e5476074043b42f810f92132b3ccd6,PodSandboxId:91db55b60dcd01a347ac0e81b3c7328767eee003925cd7cd0dd25e9dd0077353,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744637246878961578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0be70584bf135404c2228cc7641f930,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da0e07c7420187060e8e97fdc482e7ad449db0f1e51531c7d37f489104314636,PodSandboxId:6f09ec54695c367fe7d5488e72b6c729afa93cc135bc7de150d658ab7a5dfabb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744637246827749923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f123b3c03655a902147d1e7085058374,},Annotations:map[string]
string{io.kubernetes.container.hash: 90c6c47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87f546c9-d496-483c-b0df-bf1e7dfaf3b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.124013855Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe4d8efb-4dd1-4c0e-82db-dcb3d96b66b4 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.124086565Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe4d8efb-4dd1-4c0e-82db-dcb3d96b66b4 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.125157938Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74d4b2dd-9b83-4826-9a9f-652f917abf76 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.125684357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744637268125658012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74d4b2dd-9b83-4826-9a9f-652f917abf76 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.126198432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e460c180-de87-43a5-88c7-6d8d7c16a8db name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.126273872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e460c180-de87-43a5-88c7-6d8d7c16a8db name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.126447870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:670f3a17765b7df5cd9b17bcaf2f6aca83959fcaa184faba20885dd4d96d7506,PodSandboxId:79006065d6efdece85e7c327e667e733713f4612d2861851a95353d147fc6824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744637265270393422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8ad91bb-f016-4cb5-8493-a0ea8b1738fb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cb689e3,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c215de29b3296ffbf331cdd2eadc363dbe09d137ef5513f5080ea0ab29cbafba,PodSandboxId:3d2583783449fd9c46b582364abc36c2e293b4744e22df2ae6b32906948a277a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744637259615914543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3653f5bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f47b16d203eec7a2b15316d290921a1302ae15ce17a561d282d90f55e2581908,PodSandboxId:44b0a2135ea5b787d6d00f3875128e52c80333c4b1dfb88ee7edd827a87f2593,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744637252517744070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7bbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
956a39-70d8-4e40-b79d-1d61c09503c0,},Annotations:map[string]string{io.kubernetes.container.hash: ea9df5b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2c95a659f4962fb48aa9a6551cbd79a92acd4ebd192c42e969cd3a9938ff5e,PodSandboxId:79006065d6efdece85e7c327e667e733713f4612d2861851a95353d147fc6824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1744637252308085464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8ad91bb-f016-4
cb5-8493-a0ea8b1738fb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cb689e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5001f30d4ace70077497c69535feaa811e03f7dd51a14b8cf68d11bb448cd64a,PodSandboxId:76e73451db02e82196aa770cdd3bf7422ff6742b75a004fc45b36266140cc3c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744637246965167012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a57098e2cab1685da8fb
c6599806f6,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1525230c2cfa380d8c01a28fc2c3fcdf5655c41505fe003f0921baa9c2f30c91,PodSandboxId:d5988d13ff51f43004a644f9c1102728099dac2be01811b5b4b0fe407319f5a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744637246906975539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67474be7df72d1354e42b89b06a0d7ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 60e14c96,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2277f4ad2d9ecddca093fc30bd5a088ad1e5476074043b42f810f92132b3ccd6,PodSandboxId:91db55b60dcd01a347ac0e81b3c7328767eee003925cd7cd0dd25e9dd0077353,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744637246878961578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0be70584bf135404c2228cc7641f930,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da0e07c7420187060e8e97fdc482e7ad449db0f1e51531c7d37f489104314636,PodSandboxId:6f09ec54695c367fe7d5488e72b6c729afa93cc135bc7de150d658ab7a5dfabb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744637246827749923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f123b3c03655a902147d1e7085058374,},Annotations:map[string]
string{io.kubernetes.container.hash: 90c6c47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e460c180-de87-43a5-88c7-6d8d7c16a8db name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.164071208Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1329f9b-2302-4ab6-ace9-326beb315535 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.164150901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1329f9b-2302-4ab6-ace9-326beb315535 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.165818039Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e16cb3f7-dc9c-42bf-a7d2-06cf6e6db38d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.166395826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744637268166364694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e16cb3f7-dc9c-42bf-a7d2-06cf6e6db38d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.167642755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=980352d5-4915-4e3f-91fc-c0a78bb6ac6f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.167724374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=980352d5-4915-4e3f-91fc-c0a78bb6ac6f name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:27:48 test-preload-986059 crio[692]: time="2025-04-14 13:27:48.167910465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:670f3a17765b7df5cd9b17bcaf2f6aca83959fcaa184faba20885dd4d96d7506,PodSandboxId:79006065d6efdece85e7c327e667e733713f4612d2861851a95353d147fc6824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744637265270393422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8ad91bb-f016-4cb5-8493-a0ea8b1738fb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cb689e3,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c215de29b3296ffbf331cdd2eadc363dbe09d137ef5513f5080ea0ab29cbafba,PodSandboxId:3d2583783449fd9c46b582364abc36c2e293b4744e22df2ae6b32906948a277a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744637259615914543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3653f5bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f47b16d203eec7a2b15316d290921a1302ae15ce17a561d282d90f55e2581908,PodSandboxId:44b0a2135ea5b787d6d00f3875128e52c80333c4b1dfb88ee7edd827a87f2593,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744637252517744070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7bbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f
956a39-70d8-4e40-b79d-1d61c09503c0,},Annotations:map[string]string{io.kubernetes.container.hash: ea9df5b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2c95a659f4962fb48aa9a6551cbd79a92acd4ebd192c42e969cd3a9938ff5e,PodSandboxId:79006065d6efdece85e7c327e667e733713f4612d2861851a95353d147fc6824,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1744637252308085464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8ad91bb-f016-4
cb5-8493-a0ea8b1738fb,},Annotations:map[string]string{io.kubernetes.container.hash: 4cb689e3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5001f30d4ace70077497c69535feaa811e03f7dd51a14b8cf68d11bb448cd64a,PodSandboxId:76e73451db02e82196aa770cdd3bf7422ff6742b75a004fc45b36266140cc3c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744637246965167012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a57098e2cab1685da8fb
c6599806f6,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1525230c2cfa380d8c01a28fc2c3fcdf5655c41505fe003f0921baa9c2f30c91,PodSandboxId:d5988d13ff51f43004a644f9c1102728099dac2be01811b5b4b0fe407319f5a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744637246906975539,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67474be7df72d1354e42b89b06a0d7ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 60e14c96,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2277f4ad2d9ecddca093fc30bd5a088ad1e5476074043b42f810f92132b3ccd6,PodSandboxId:91db55b60dcd01a347ac0e81b3c7328767eee003925cd7cd0dd25e9dd0077353,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744637246878961578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0be70584bf135404c2228cc7641f930,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da0e07c7420187060e8e97fdc482e7ad449db0f1e51531c7d37f489104314636,PodSandboxId:6f09ec54695c367fe7d5488e72b6c729afa93cc135bc7de150d658ab7a5dfabb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744637246827749923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-986059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f123b3c03655a902147d1e7085058374,},Annotations:map[string]
string{io.kubernetes.container.hash: 90c6c47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=980352d5-4915-4e3f-91fc-c0a78bb6ac6f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	670f3a17765b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 seconds ago       Running             storage-provisioner       3                   79006065d6efd       storage-provisioner
	c215de29b3296       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   3d2583783449f       coredns-6d4b75cb6d-kwt6l
	f47b16d203eec       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   44b0a2135ea5b       kube-proxy-q7bbm
	bb2c95a659f49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       2                   79006065d6efd       storage-provisioner
	5001f30d4ace7       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   76e73451db02e       kube-scheduler-test-preload-986059
	1525230c2cfa3       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   d5988d13ff51f       etcd-test-preload-986059
	2277f4ad2d9ec       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   91db55b60dcd0       kube-controller-manager-test-preload-986059
	da0e07c742018       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   6f09ec54695c3       kube-apiserver-test-preload-986059
	
	
	==> coredns [c215de29b3296ffbf331cdd2eadc363dbe09d137ef5513f5080ea0ab29cbafba] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:58054 - 21536 "HINFO IN 8003171322317227663.5101833250078117456. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023711396s
	
	
	==> describe nodes <==
	Name:               test-preload-986059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-986059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9d10b41d083ee2064b1b8c7e16503e13b1847696
	                    minikube.k8s.io/name=test-preload-986059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T13_26_16_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 13:26:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-986059
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 13:27:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 13:27:40 +0000   Mon, 14 Apr 2025 13:26:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 13:27:40 +0000   Mon, 14 Apr 2025 13:26:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 13:27:40 +0000   Mon, 14 Apr 2025 13:26:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 13:27:40 +0000   Mon, 14 Apr 2025 13:27:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    test-preload-986059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebbcde3b7efd46c29a977ff38be57c0e
	  System UUID:                ebbcde3b-7efd-46c2-9a97-7ff38be57c0e
	  Boot ID:                    635ed43c-c747-4847-ae4c-f9f59c4bc08e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-kwt6l                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     79s
	  kube-system                 etcd-test-preload-986059                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         92s
	  kube-system                 kube-apiserver-test-preload-986059             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-test-preload-986059    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-q7bbm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-test-preload-986059             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 77s                kube-proxy       
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node test-preload-986059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node test-preload-986059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s                kubelet          Node test-preload-986059 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                81s                kubelet          Node test-preload-986059 status is now: NodeReady
	  Normal  RegisteredNode           80s                node-controller  Node test-preload-986059 event: Registered Node test-preload-986059 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-986059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-986059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-986059 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-986059 event: Registered Node test-preload-986059 in Controller
	
	
	==> dmesg <==
	[Apr14 13:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049638] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038939] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.975217] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Apr14 13:27] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.477264] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.998032] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.059392] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057329] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.168771] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.136775] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.282394] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[ +13.202670] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +0.058293] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.975827] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +6.177513] kauditd_printk_skb: 105 callbacks suppressed
	[  +5.629291] systemd-fstab-generator[1856]: Ignoring "noauto" option for root device
	[  +0.099402] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.958612] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [1525230c2cfa380d8c01a28fc2c3fcdf5655c41505fe003f0921baa9c2f30c91] <==
	{"level":"info","ts":"2025-04-14T13:27:27.324Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"7fe6bf77aaafe0f6","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-14T13:27:27.329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 switched to configuration voters=(9216264208145965302)"}
	{"level":"info","ts":"2025-04-14T13:27:27.330Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fb8a78b66dce1ac7","local-member-id":"7fe6bf77aaafe0f6","added-peer-id":"7fe6bf77aaafe0f6","added-peer-peer-urls":["https://192.168.39.208:2380"]}
	{"level":"info","ts":"2025-04-14T13:27:27.330Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb8a78b66dce1ac7","local-member-id":"7fe6bf77aaafe0f6","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T13:27:27.330Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T13:27:27.338Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T13:27:27.339Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7fe6bf77aaafe0f6","initial-advertise-peer-urls":["https://192.168.39.208:2380"],"listen-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.208:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T13:27:27.339Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T13:27:27.342Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"7fe6bf77aaafe0f6","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-04-14T13:27:27.342Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2025-04-14T13:27:27.342Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2025-04-14T13:27:28.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T13:27:28.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T13:27:28.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgPreVoteResp from 7fe6bf77aaafe0f6 at term 2"}
	{"level":"info","ts":"2025-04-14T13:27:28.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T13:27:28.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgVoteResp from 7fe6bf77aaafe0f6 at term 3"}
	{"level":"info","ts":"2025-04-14T13:27:28.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became leader at term 3"}
	{"level":"info","ts":"2025-04-14T13:27:28.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7fe6bf77aaafe0f6 elected leader 7fe6bf77aaafe0f6 at term 3"}
	{"level":"info","ts":"2025-04-14T13:27:28.095Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"7fe6bf77aaafe0f6","local-member-attributes":"{Name:test-preload-986059 ClientURLs:[https://192.168.39.208:2379]}","request-path":"/0/members/7fe6bf77aaafe0f6/attributes","cluster-id":"fb8a78b66dce1ac7","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T13:27:28.095Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T13:27:28.097Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.208:2379"}
	{"level":"info","ts":"2025-04-14T13:27:28.109Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T13:27:28.115Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T13:27:28.115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T13:27:28.116Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:27:48 up 0 min,  0 users,  load average: 1.23, 0.35, 0.12
	Linux test-preload-986059 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [da0e07c7420187060e8e97fdc482e7ad449db0f1e51531c7d37f489104314636] <==
	I0414 13:27:30.686409       1 establishing_controller.go:76] Starting EstablishingController
	I0414 13:27:30.686804       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0414 13:27:30.686870       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0414 13:27:30.686911       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0414 13:27:30.709692       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0414 13:27:30.714503       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0414 13:27:30.785987       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0414 13:27:30.786251       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0414 13:27:30.818414       1 shared_informer.go:262] Caches are synced for node_authorizer
	E0414 13:27:30.824457       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0414 13:27:30.862553       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 13:27:30.881384       1 cache.go:39] Caches are synced for autoregister controller
	I0414 13:27:30.881790       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0414 13:27:30.885295       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 13:27:30.908544       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0414 13:27:31.363146       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0414 13:27:31.690772       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 13:27:32.505846       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0414 13:27:32.527002       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0414 13:27:32.593493       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0414 13:27:32.630289       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 13:27:32.642433       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 13:27:32.873064       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0414 13:27:43.680024       1 controller.go:611] quota admission added evaluator for: endpoints
	I0414 13:27:43.725761       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2277f4ad2d9ecddca093fc30bd5a088ad1e5476074043b42f810f92132b3ccd6] <==
	I0414 13:27:43.511337       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0414 13:27:43.511288       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0414 13:27:43.511485       1 event.go:294] "Event occurred" object="test-preload-986059" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-986059 event: Registered Node test-preload-986059 in Controller"
	I0414 13:27:43.522628       1 shared_informer.go:262] Caches are synced for daemon sets
	I0414 13:27:43.522743       1 shared_informer.go:262] Caches are synced for job
	I0414 13:27:43.523901       1 shared_informer.go:262] Caches are synced for PVC protection
	I0414 13:27:43.542604       1 shared_informer.go:262] Caches are synced for node
	I0414 13:27:43.542650       1 range_allocator.go:173] Starting range CIDR allocator
	I0414 13:27:43.542655       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0414 13:27:43.542664       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0414 13:27:43.545208       1 shared_informer.go:262] Caches are synced for endpoint
	I0414 13:27:43.550200       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0414 13:27:43.551290       1 shared_informer.go:262] Caches are synced for stateful set
	I0414 13:27:43.556217       1 shared_informer.go:262] Caches are synced for TTL
	I0414 13:27:43.586841       1 shared_informer.go:262] Caches are synced for expand
	I0414 13:27:43.606405       1 shared_informer.go:262] Caches are synced for HPA
	I0414 13:27:43.622213       1 shared_informer.go:262] Caches are synced for persistent volume
	I0414 13:27:43.628823       1 shared_informer.go:262] Caches are synced for attach detach
	I0414 13:27:43.666347       1 shared_informer.go:262] Caches are synced for PV protection
	I0414 13:27:43.673694       1 shared_informer.go:262] Caches are synced for cronjob
	I0414 13:27:43.754791       1 shared_informer.go:262] Caches are synced for resource quota
	I0414 13:27:43.761086       1 shared_informer.go:262] Caches are synced for resource quota
	I0414 13:27:44.172553       1 shared_informer.go:262] Caches are synced for garbage collector
	I0414 13:27:44.172627       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0414 13:27:44.187430       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [f47b16d203eec7a2b15316d290921a1302ae15ce17a561d282d90f55e2581908] <==
	I0414 13:27:32.805990       1 node.go:163] Successfully retrieved node IP: 192.168.39.208
	I0414 13:27:32.806138       1 server_others.go:138] "Detected node IP" address="192.168.39.208"
	I0414 13:27:32.806204       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0414 13:27:32.854113       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0414 13:27:32.854233       1 server_others.go:206] "Using iptables Proxier"
	I0414 13:27:32.857388       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0414 13:27:32.858313       1 server.go:661] "Version info" version="v1.24.4"
	I0414 13:27:32.858383       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 13:27:32.861739       1 config.go:317] "Starting service config controller"
	I0414 13:27:32.862016       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0414 13:27:32.862065       1 config.go:226] "Starting endpoint slice config controller"
	I0414 13:27:32.862072       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0414 13:27:32.866514       1 config.go:444] "Starting node config controller"
	I0414 13:27:32.866664       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0414 13:27:32.962605       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0414 13:27:32.962664       1 shared_informer.go:262] Caches are synced for service config
	I0414 13:27:32.968223       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [5001f30d4ace70077497c69535feaa811e03f7dd51a14b8cf68d11bb448cd64a] <==
	I0414 13:27:27.892016       1 serving.go:348] Generated self-signed cert in-memory
	W0414 13:27:30.739743       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 13:27:30.740035       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 13:27:30.740073       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 13:27:30.740082       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 13:27:30.789118       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0414 13:27:30.789162       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 13:27:30.804658       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0414 13:27:30.804898       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 13:27:30.804952       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 13:27:30.804975       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0414 13:27:30.906019       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 13:27:31 test-preload-986059 kubelet[1145]: I0414 13:27:31.623980    1145 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxz6j\" (UniqueName: \"kubernetes.io/projected/0b97e964-7ac8-4def-8311-0be05702a036-kube-api-access-lxz6j\") pod \"0b97e964-7ac8-4def-8311-0be05702a036\" (UID: \"0b97e964-7ac8-4def-8311-0be05702a036\") "
	Apr 14 13:27:31 test-preload-986059 kubelet[1145]: E0414 13:27:31.625164    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 13:27:31 test-preload-986059 kubelet[1145]: E0414 13:27:31.625257    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd-config-volume podName:bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd nodeName:}" failed. No retries permitted until 2025-04-14 13:27:32.125235548 +0000 UTC m=+6.134721110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd-config-volume") pod "coredns-6d4b75cb6d-kwt6l" (UID: "bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd") : object "kube-system"/"coredns" not registered
	Apr 14 13:27:31 test-preload-986059 kubelet[1145]: W0414 13:27:31.625640    1145 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/0b97e964-7ac8-4def-8311-0be05702a036/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Apr 14 13:27:31 test-preload-986059 kubelet[1145]: W0414 13:27:31.625744    1145 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/0b97e964-7ac8-4def-8311-0be05702a036/volumes/kubernetes.io~projected/kube-api-access-lxz6j: clearQuota called, but quotas disabled
	Apr 14 13:27:31 test-preload-986059 kubelet[1145]: I0414 13:27:31.625997    1145 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b97e964-7ac8-4def-8311-0be05702a036-kube-api-access-lxz6j" (OuterVolumeSpecName: "kube-api-access-lxz6j") pod "0b97e964-7ac8-4def-8311-0be05702a036" (UID: "0b97e964-7ac8-4def-8311-0be05702a036"). InnerVolumeSpecName "kube-api-access-lxz6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 14 13:27:31 test-preload-986059 kubelet[1145]: I0414 13:27:31.626368    1145 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b97e964-7ac8-4def-8311-0be05702a036-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b97e964-7ac8-4def-8311-0be05702a036" (UID: "0b97e964-7ac8-4def-8311-0be05702a036"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Apr 14 13:27:31 test-preload-986059 kubelet[1145]: I0414 13:27:31.724915    1145 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b97e964-7ac8-4def-8311-0be05702a036-config-volume\") on node \"test-preload-986059\" DevicePath \"\""
	Apr 14 13:27:31 test-preload-986059 kubelet[1145]: I0414 13:27:31.725058    1145 reconciler.go:384] "Volume detached for volume \"kube-api-access-lxz6j\" (UniqueName: \"kubernetes.io/projected/0b97e964-7ac8-4def-8311-0be05702a036-kube-api-access-lxz6j\") on node \"test-preload-986059\" DevicePath \"\""
	Apr 14 13:27:32 test-preload-986059 kubelet[1145]: E0414 13:27:32.127255    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 13:27:32 test-preload-986059 kubelet[1145]: E0414 13:27:32.127324    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd-config-volume podName:bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd nodeName:}" failed. No retries permitted until 2025-04-14 13:27:33.127310038 +0000 UTC m=+7.136795597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd-config-volume") pod "coredns-6d4b75cb6d-kwt6l" (UID: "bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd") : object "kube-system"/"coredns" not registered
	Apr 14 13:27:32 test-preload-986059 kubelet[1145]: I0414 13:27:32.267682    1145 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0b97e964-7ac8-4def-8311-0be05702a036 path="/var/lib/kubelet/pods/0b97e964-7ac8-4def-8311-0be05702a036/volumes"
	Apr 14 13:27:32 test-preload-986059 kubelet[1145]: I0414 13:27:32.300823    1145 scope.go:110] "RemoveContainer" containerID="931488ad0fab003c539f0bdd5e226d8b5a75900c24725c30636463b97663aefd"
	Apr 14 13:27:33 test-preload-986059 kubelet[1145]: E0414 13:27:33.133893    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 13:27:33 test-preload-986059 kubelet[1145]: E0414 13:27:33.134067    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd-config-volume podName:bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd nodeName:}" failed. No retries permitted until 2025-04-14 13:27:35.134000656 +0000 UTC m=+9.143486223 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd-config-volume") pod "coredns-6d4b75cb6d-kwt6l" (UID: "bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd") : object "kube-system"/"coredns" not registered
	Apr 14 13:27:33 test-preload-986059 kubelet[1145]: E0414 13:27:33.256972    1145 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-kwt6l" podUID=bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd
	Apr 14 13:27:33 test-preload-986059 kubelet[1145]: I0414 13:27:33.310357    1145 scope.go:110] "RemoveContainer" containerID="931488ad0fab003c539f0bdd5e226d8b5a75900c24725c30636463b97663aefd"
	Apr 14 13:27:33 test-preload-986059 kubelet[1145]: I0414 13:27:33.310743    1145 scope.go:110] "RemoveContainer" containerID="bb2c95a659f4962fb48aa9a6551cbd79a92acd4ebd192c42e969cd3a9938ff5e"
	Apr 14 13:27:33 test-preload-986059 kubelet[1145]: E0414 13:27:33.310926    1145 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f8ad91bb-f016-4cb5-8493-a0ea8b1738fb)\"" pod="kube-system/storage-provisioner" podUID=f8ad91bb-f016-4cb5-8493-a0ea8b1738fb
	Apr 14 13:27:34 test-preload-986059 kubelet[1145]: I0414 13:27:34.320801    1145 scope.go:110] "RemoveContainer" containerID="bb2c95a659f4962fb48aa9a6551cbd79a92acd4ebd192c42e969cd3a9938ff5e"
	Apr 14 13:27:34 test-preload-986059 kubelet[1145]: E0414 13:27:34.321050    1145 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f8ad91bb-f016-4cb5-8493-a0ea8b1738fb)\"" pod="kube-system/storage-provisioner" podUID=f8ad91bb-f016-4cb5-8493-a0ea8b1738fb
	Apr 14 13:27:35 test-preload-986059 kubelet[1145]: E0414 13:27:35.149673    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 13:27:35 test-preload-986059 kubelet[1145]: E0414 13:27:35.149791    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd-config-volume podName:bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd nodeName:}" failed. No retries permitted until 2025-04-14 13:27:39.149764458 +0000 UTC m=+13.159250022 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd-config-volume") pod "coredns-6d4b75cb6d-kwt6l" (UID: "bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd") : object "kube-system"/"coredns" not registered
	Apr 14 13:27:35 test-preload-986059 kubelet[1145]: E0414 13:27:35.257205    1145 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-kwt6l" podUID=bcf5c38e-4a9a-4368-872b-bf0b9ddab9bd
	Apr 14 13:27:45 test-preload-986059 kubelet[1145]: I0414 13:27:45.257205    1145 scope.go:110] "RemoveContainer" containerID="bb2c95a659f4962fb48aa9a6551cbd79a92acd4ebd192c42e969cd3a9938ff5e"
	
	
	==> storage-provisioner [670f3a17765b7df5cd9b17bcaf2f6aca83959fcaa184faba20885dd4d96d7506] <==
	I0414 13:27:45.384438       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 13:27:45.402276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 13:27:45.403144       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [bb2c95a659f4962fb48aa9a6551cbd79a92acd4ebd192c42e969cd3a9938ff5e] <==
	I0414 13:27:32.469873       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0414 13:27:32.474050       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-986059 -n test-preload-986059
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-986059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-986059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-986059
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-986059: (1.234994311s)
--- FAIL: TestPreload (174.59s)

                                                
                                    
x
+
TestKubernetesUpgrade (395.74s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225418 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-225418 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m5.551134877s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-225418] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-225418" primary control-plane node in "kubernetes-upgrade-225418" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 13:30:48.353809 1212565 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:30:48.353951 1212565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:30:48.353959 1212565 out.go:358] Setting ErrFile to fd 2...
	I0414 13:30:48.353966 1212565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:30:48.354219 1212565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:30:48.354970 1212565 out.go:352] Setting JSON to false
	I0414 13:30:48.356256 1212565 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18795,"bootTime":1744618653,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:30:48.356331 1212565 start.go:139] virtualization: kvm guest
	I0414 13:30:48.358559 1212565 out.go:177] * [kubernetes-upgrade-225418] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:30:48.360945 1212565 notify.go:220] Checking for updates...
	I0414 13:30:48.364827 1212565 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 13:30:48.366957 1212565 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:30:48.369063 1212565 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:30:48.371716 1212565 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:30:48.373487 1212565 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:30:48.375259 1212565 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:30:48.377623 1212565 config.go:182] Loaded profile config "NoKubernetes-814220": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:30:48.377811 1212565 config.go:182] Loaded profile config "force-systemd-env-835118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:30:48.378018 1212565 config.go:182] Loaded profile config "running-upgrade-865678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0414 13:30:48.378166 1212565 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:30:48.435897 1212565 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 13:30:48.437396 1212565 start.go:297] selected driver: kvm2
	I0414 13:30:48.437427 1212565 start.go:901] validating driver "kvm2" against <nil>
	I0414 13:30:48.437447 1212565 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:30:48.438829 1212565 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:30:48.438965 1212565 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:30:48.464067 1212565 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:30:48.464140 1212565 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 13:30:48.464510 1212565 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 13:30:48.464558 1212565 cni.go:84] Creating CNI manager for ""
	I0414 13:30:48.464611 1212565 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:30:48.464625 1212565 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 13:30:48.464700 1212565 start.go:340] cluster config:
	{Name:kubernetes-upgrade-225418 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-225418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:30:48.464828 1212565 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:30:48.466991 1212565 out.go:177] * Starting "kubernetes-upgrade-225418" primary control-plane node in "kubernetes-upgrade-225418" cluster
	I0414 13:30:48.468687 1212565 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 13:30:48.468779 1212565 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 13:30:48.468792 1212565 cache.go:56] Caching tarball of preloaded images
	I0414 13:30:48.468955 1212565 preload.go:172] Found /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 13:30:48.469001 1212565 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 13:30:48.469202 1212565 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/config.json ...
	I0414 13:30:48.469250 1212565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/config.json: {Name:mkfee58eb055cbaa740c1cea5e64e5d20a7a7be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:30:48.469521 1212565 start.go:360] acquireMachinesLock for kubernetes-upgrade-225418: {Name:mk1c744e856a4885e6214d755048926590b4b12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 13:31:22.280321 1212565 start.go:364] duration metric: took 33.810759144s to acquireMachinesLock for "kubernetes-upgrade-225418"
	I0414 13:31:22.280402 1212565 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-225418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-225418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:31:22.280580 1212565 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 13:31:22.282308 1212565 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 13:31:22.282538 1212565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:31:22.282593 1212565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:31:22.306199 1212565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42629
	I0414 13:31:22.306790 1212565 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:31:22.307506 1212565 main.go:141] libmachine: Using API Version  1
	I0414 13:31:22.307545 1212565 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:31:22.308121 1212565 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:31:22.308411 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetMachineName
	I0414 13:31:22.308621 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:31:22.308845 1212565 start.go:159] libmachine.API.Create for "kubernetes-upgrade-225418" (driver="kvm2")
	I0414 13:31:22.308876 1212565 client.go:168] LocalClient.Create starting
	I0414 13:31:22.308928 1212565 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem
	I0414 13:31:22.309019 1212565 main.go:141] libmachine: Decoding PEM data...
	I0414 13:31:22.309054 1212565 main.go:141] libmachine: Parsing certificate...
	I0414 13:31:22.309162 1212565 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem
	I0414 13:31:22.309215 1212565 main.go:141] libmachine: Decoding PEM data...
	I0414 13:31:22.309228 1212565 main.go:141] libmachine: Parsing certificate...
	I0414 13:31:22.309252 1212565 main.go:141] libmachine: Running pre-create checks...
	I0414 13:31:22.309268 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .PreCreateCheck
	I0414 13:31:22.309708 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetConfigRaw
	I0414 13:31:22.310280 1212565 main.go:141] libmachine: Creating machine...
	I0414 13:31:22.310296 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .Create
	I0414 13:31:22.310539 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) creating KVM machine...
	I0414 13:31:22.310563 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) creating network...
	I0414 13:31:22.312340 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found existing default KVM network
	I0414 13:31:22.313489 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:22.313242 1213181 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1e:e0:6c} reservation:<nil>}
	I0414 13:31:22.314572 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:22.314449 1213181 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001174c0}
	I0414 13:31:22.314604 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | created network xml: 
	I0414 13:31:22.314613 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | <network>
	I0414 13:31:22.314624 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG |   <name>mk-kubernetes-upgrade-225418</name>
	I0414 13:31:22.314635 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG |   <dns enable='no'/>
	I0414 13:31:22.314644 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG |   
	I0414 13:31:22.314654 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0414 13:31:22.314677 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG |     <dhcp>
	I0414 13:31:22.314690 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0414 13:31:22.314702 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG |     </dhcp>
	I0414 13:31:22.314713 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG |   </ip>
	I0414 13:31:22.314721 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG |   
	I0414 13:31:22.314735 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | </network>
	I0414 13:31:22.314744 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | 
	I0414 13:31:22.321167 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | trying to create private KVM network mk-kubernetes-upgrade-225418 192.168.50.0/24...
	I0414 13:31:22.427325 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | private KVM network mk-kubernetes-upgrade-225418 192.168.50.0/24 created
	I0414 13:31:22.427368 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) setting up store path in /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418 ...
	I0414 13:31:22.427381 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:22.427288 1213181 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:31:22.427402 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) building disk image from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 13:31:22.427549 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Downloading /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 13:31:22.733970 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:22.733779 1213181 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa...
	I0414 13:31:22.882161 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:22.881980 1213181 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/kubernetes-upgrade-225418.rawdisk...
	I0414 13:31:22.882209 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | Writing magic tar header
	I0414 13:31:22.882228 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | Writing SSH key tar header
	I0414 13:31:22.882240 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:22.882110 1213181 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418 ...
	I0414 13:31:22.882256 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418
	I0414 13:31:22.882411 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418 (perms=drwx------)
	I0414 13:31:22.882447 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines
	I0414 13:31:22.882458 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:31:22.882498 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines (perms=drwxr-xr-x)
	I0414 13:31:22.882522 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube (perms=drwxr-xr-x)
	I0414 13:31:22.882536 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927 (perms=drwxrwxr-x)
	I0414 13:31:22.882556 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 13:31:22.882569 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927
	I0414 13:31:22.882581 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 13:31:22.882594 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) creating domain...
	I0414 13:31:22.882610 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 13:31:22.882620 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | checking permissions on dir: /home/jenkins
	I0414 13:31:22.882632 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | checking permissions on dir: /home
	I0414 13:31:22.882643 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | skipping /home - not owner
	I0414 13:31:22.883882 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) define libvirt domain using xml: 
	I0414 13:31:22.883912 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) <domain type='kvm'>
	I0414 13:31:22.883924 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   <name>kubernetes-upgrade-225418</name>
	I0414 13:31:22.883933 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   <memory unit='MiB'>2200</memory>
	I0414 13:31:22.883942 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   <vcpu>2</vcpu>
	I0414 13:31:22.883949 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   <features>
	I0414 13:31:22.883958 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <acpi/>
	I0414 13:31:22.883972 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <apic/>
	I0414 13:31:22.883985 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <pae/>
	I0414 13:31:22.883994 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     
	I0414 13:31:22.884006 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   </features>
	I0414 13:31:22.884014 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   <cpu mode='host-passthrough'>
	I0414 13:31:22.884025 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   
	I0414 13:31:22.884038 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   </cpu>
	I0414 13:31:22.884050 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   <os>
	I0414 13:31:22.884061 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <type>hvm</type>
	I0414 13:31:22.884069 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <boot dev='cdrom'/>
	I0414 13:31:22.884080 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <boot dev='hd'/>
	I0414 13:31:22.884089 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <bootmenu enable='no'/>
	I0414 13:31:22.884099 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   </os>
	I0414 13:31:22.884138 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   <devices>
	I0414 13:31:22.884166 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <disk type='file' device='cdrom'>
	I0414 13:31:22.884184 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/boot2docker.iso'/>
	I0414 13:31:22.884203 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <target dev='hdc' bus='scsi'/>
	I0414 13:31:22.884228 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <readonly/>
	I0414 13:31:22.884239 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     </disk>
	I0414 13:31:22.884253 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <disk type='file' device='disk'>
	I0414 13:31:22.884267 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 13:31:22.884286 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/kubernetes-upgrade-225418.rawdisk'/>
	I0414 13:31:22.884297 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <target dev='hda' bus='virtio'/>
	I0414 13:31:22.884307 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     </disk>
	I0414 13:31:22.884394 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <interface type='network'>
	I0414 13:31:22.884490 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <source network='mk-kubernetes-upgrade-225418'/>
	I0414 13:31:22.884526 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <model type='virtio'/>
	I0414 13:31:22.884538 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     </interface>
	I0414 13:31:22.884551 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <interface type='network'>
	I0414 13:31:22.884561 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <source network='default'/>
	I0414 13:31:22.884568 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <model type='virtio'/>
	I0414 13:31:22.884578 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     </interface>
	I0414 13:31:22.884585 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <serial type='pty'>
	I0414 13:31:22.884593 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <target port='0'/>
	I0414 13:31:22.884598 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     </serial>
	I0414 13:31:22.884607 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <console type='pty'>
	I0414 13:31:22.884614 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <target type='serial' port='0'/>
	I0414 13:31:22.884623 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     </console>
	I0414 13:31:22.884629 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     <rng model='virtio'>
	I0414 13:31:22.884638 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)       <backend model='random'>/dev/random</backend>
	I0414 13:31:22.884649 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     </rng>
	I0414 13:31:22.884684 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     
	I0414 13:31:22.884701 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)     
	I0414 13:31:22.884736 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418)   </devices>
	I0414 13:31:22.884813 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) </domain>
	I0414 13:31:22.884838 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) 
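	The XML dump above is the full libvirt domain definition the kvm2 driver hands to libvirt right before the "starting domain..." / "creating domain..." steps below. As a minimal sketch of that define-and-start flow using the Go libvirt bindings (the module path, file name, and error handling here are assumptions for illustration, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Hypothetical file holding a domain definition like the XML dumped above.
		xml, err := os.ReadFile("domain.xml")
		if err != nil {
			panic(err)
		}
		// Connect to the system libvirt daemon (matches KVMQemuURI:qemu:///system in this log).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		// Define the persistent domain from XML, then start it (equivalent of "virsh define" + "virsh start").
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			panic(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			panic(err)
		}
		fmt.Println("domain started")
	}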
	I0414 13:31:22.888966 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:aa:3c:e2 in network default
	I0414 13:31:22.889754 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) starting domain...
	I0414 13:31:22.889783 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) ensuring networks are active...
	I0414 13:31:22.889804 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:22.890607 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Ensuring network default is active
	I0414 13:31:22.890987 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Ensuring network mk-kubernetes-upgrade-225418 is active
	I0414 13:31:22.891808 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) getting domain XML...
	I0414 13:31:22.892751 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) creating domain...
	I0414 13:31:24.253716 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) waiting for IP...
	I0414 13:31:24.254568 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:24.255186 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:24.255230 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:24.255162 1213181 retry.go:31] will retry after 225.659603ms: waiting for domain to come up
	I0414 13:31:24.482813 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:24.483512 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:24.483548 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:24.483431 1213181 retry.go:31] will retry after 279.192254ms: waiting for domain to come up
	I0414 13:31:24.764110 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:24.764714 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:24.764746 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:24.764677 1213181 retry.go:31] will retry after 383.521522ms: waiting for domain to come up
	I0414 13:31:25.150442 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:25.150946 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:25.150977 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:25.150925 1213181 retry.go:31] will retry after 379.919824ms: waiting for domain to come up
	I0414 13:31:25.532659 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:25.533190 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:25.533238 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:25.533159 1213181 retry.go:31] will retry after 615.974562ms: waiting for domain to come up
	I0414 13:31:26.151094 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:26.151742 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:26.151774 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:26.151694 1213181 retry.go:31] will retry after 926.009593ms: waiting for domain to come up
	I0414 13:31:27.080005 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:27.080778 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:27.080814 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:27.080719 1213181 retry.go:31] will retry after 1.130689417s: waiting for domain to come up
	I0414 13:31:28.212859 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:28.213400 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:28.213438 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:28.213378 1213181 retry.go:31] will retry after 1.254102015s: waiting for domain to come up
	I0414 13:31:29.470244 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:29.470905 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:29.470941 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:29.470850 1213181 retry.go:31] will retry after 1.734273045s: waiting for domain to come up
	I0414 13:31:31.207549 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:31.208027 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:31.208087 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:31.208014 1213181 retry.go:31] will retry after 2.176181855s: waiting for domain to come up
	I0414 13:31:33.386250 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:33.386954 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:33.387003 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:33.386888 1213181 retry.go:31] will retry after 2.720422198s: waiting for domain to come up
	I0414 13:31:36.111017 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:36.111545 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:36.111570 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:36.111498 1213181 retry.go:31] will retry after 3.290812232s: waiting for domain to come up
	I0414 13:31:39.404928 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:39.405670 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find current IP address of domain kubernetes-upgrade-225418 in network mk-kubernetes-upgrade-225418
	I0414 13:31:39.405734 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | I0414 13:31:39.405627 1213181 retry.go:31] will retry after 3.626521447s: waiting for domain to come up
	I0414 13:31:43.033682 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.034274 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has current primary IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.034300 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) found domain IP: 192.168.50.229
	I0414 13:31:43.034314 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) reserving static IP address...
	I0414 13:31:43.034714 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-225418", mac: "52:54:00:48:28:d1", ip: "192.168.50.229"} in network mk-kubernetes-upgrade-225418
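	The "will retry after ..." lines above are a wait loop with a growing delay, polling until the freshly created domain shows up with a DHCP lease. A small illustrative Go version of that shape (not minikube's retry.go; the condition, delays, and growth factor are placeholders):

	package main

	import (
		"fmt"
		"time"
	)

	// waitFor polls cond with an increasing delay until it reports true or the timeout expires.
	func waitFor(cond func() (bool, error), timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for {
			ok, err := cond()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s", timeout)
			}
			fmt.Printf("will retry after %s: waiting for domain to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the delay, roughly like the backoff in the log
		}
	}

	func main() {
		start := time.Now()
		_ = waitFor(func() (bool, error) {
			// Placeholder condition: stand-in for "does the domain have an IP in the network yet?"
			return time.Since(start) > 2*time.Second, nil
		}, 30*time.Second)
	}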
	I0414 13:31:43.152716 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | Getting to WaitForSSH function...
	I0414 13:31:43.152748 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) reserved static IP address 192.168.50.229 for domain kubernetes-upgrade-225418
	I0414 13:31:43.152762 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) waiting for SSH...
	I0414 13:31:43.155897 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.156351 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:minikube Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:43.156382 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.156620 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | Using SSH client type: external
	I0414 13:31:43.156652 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | Using SSH private key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa (-rw-------)
	I0414 13:31:43.156684 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:31:43.156700 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | About to run SSH command:
	I0414 13:31:43.156734 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | exit 0
	I0414 13:31:43.288004 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | SSH cmd err, output: <nil>: 
	I0414 13:31:43.288356 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) KVM machine creation complete
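	Machine creation is only declared complete once a throwaway "exit 0" succeeds over SSH using the external ssh client and options dumped above. A hedged sketch of that reachability probe (the key path and address are copied from this log; the loop bounds and interval are invented):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Mirrors the external ssh invocation in the log: probe the guest by running "exit 0".
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", "/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa",
			"docker@192.168.50.229", "exit 0",
		}
		for i := 0; i < 30; i++ {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}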
	I0414 13:31:43.288680 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetConfigRaw
	I0414 13:31:43.289325 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:31:43.289529 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:31:43.289691 1212565 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 13:31:43.289707 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetState
	I0414 13:31:43.291159 1212565 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 13:31:43.291177 1212565 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 13:31:43.291184 1212565 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 13:31:43.291192 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:43.294236 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.294789 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:43.294826 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.295109 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:31:43.295354 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:43.295642 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:43.295913 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:31:43.296140 1212565 main.go:141] libmachine: Using SSH client type: native
	I0414 13:31:43.296449 1212565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:31:43.296464 1212565 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 13:31:43.403932 1212565 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:31:43.403967 1212565 main.go:141] libmachine: Detecting the provisioner...
	I0414 13:31:43.403980 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:43.407633 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.408368 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:43.408411 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.408582 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:31:43.408913 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:43.409197 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:43.409442 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:31:43.409650 1212565 main.go:141] libmachine: Using SSH client type: native
	I0414 13:31:43.409895 1212565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:31:43.409907 1212565 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 13:31:43.517659 1212565 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 13:31:43.517746 1212565 main.go:141] libmachine: found compatible host: buildroot
	I0414 13:31:43.517759 1212565 main.go:141] libmachine: Provisioning with buildroot...
	I0414 13:31:43.517772 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetMachineName
	I0414 13:31:43.518106 1212565 buildroot.go:166] provisioning hostname "kubernetes-upgrade-225418"
	I0414 13:31:43.518130 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetMachineName
	I0414 13:31:43.518347 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:43.521671 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.522212 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:43.522251 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.522466 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:31:43.522796 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:43.523109 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:43.523367 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:31:43.523576 1212565 main.go:141] libmachine: Using SSH client type: native
	I0414 13:31:43.523892 1212565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:31:43.523917 1212565 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-225418 && echo "kubernetes-upgrade-225418" | sudo tee /etc/hostname
	I0414 13:31:43.648800 1212565 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-225418
	
	I0414 13:31:43.648862 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:43.653030 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.653517 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:43.653560 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.653778 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:31:43.654040 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:43.654453 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:43.654669 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:31:43.654897 1212565 main.go:141] libmachine: Using SSH client type: native
	I0414 13:31:43.655182 1212565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:31:43.655205 1212565 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-225418' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-225418/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-225418' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:31:43.782852 1212565 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:31:43.782896 1212565 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 13:31:43.782933 1212565 buildroot.go:174] setting up certificates
	I0414 13:31:43.782950 1212565 provision.go:84] configureAuth start
	I0414 13:31:43.782967 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetMachineName
	I0414 13:31:43.783354 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetIP
	I0414 13:31:43.786715 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.787496 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:43.787539 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.787814 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:43.790590 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.791188 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:43.791232 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:43.791436 1212565 provision.go:143] copyHostCerts
	I0414 13:31:43.791502 1212565 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem, removing ...
	I0414 13:31:43.791526 1212565 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem
	I0414 13:31:43.791589 1212565 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 13:31:43.791746 1212565 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem, removing ...
	I0414 13:31:43.791760 1212565 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem
	I0414 13:31:43.791800 1212565 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 13:31:43.791891 1212565 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem, removing ...
	I0414 13:31:43.791903 1212565 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem
	I0414 13:31:43.791931 1212565 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 13:31:43.791995 1212565 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-225418 san=[127.0.0.1 192.168.50.229 kubernetes-upgrade-225418 localhost minikube]
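	The server certificate generated here carries the SANs listed in the log line (127.0.0.1, the node IP, the hostname, localhost, minikube). A simplified Go sketch of producing a certificate with those SANs; it is self-signed for brevity, whereas minikube signs the server cert with its CA key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// SAN values taken from the log; validity and key usage are illustrative.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-225418"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"kubernetes-upgrade-225418", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.229")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}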
	I0414 13:31:44.138276 1212565 provision.go:177] copyRemoteCerts
	I0414 13:31:44.138362 1212565 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:31:44.138393 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:44.141393 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.141900 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:44.141941 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.142275 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:31:44.142577 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:44.142823 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:31:44.143050 1212565 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa Username:docker}
	I0414 13:31:44.230634 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 13:31:44.257853 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:31:44.283683 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 13:31:44.310975 1212565 provision.go:87] duration metric: took 528.003666ms to configureAuth
	I0414 13:31:44.311015 1212565 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:31:44.311233 1212565 config.go:182] Loaded profile config "kubernetes-upgrade-225418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 13:31:44.311347 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:44.314802 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.315253 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:44.315294 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.315514 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:31:44.315797 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:44.316008 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:44.316205 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:31:44.316416 1212565 main.go:141] libmachine: Using SSH client type: native
	I0414 13:31:44.316628 1212565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:31:44.316645 1212565 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:31:44.556065 1212565 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:31:44.556101 1212565 main.go:141] libmachine: Checking connection to Docker...
	I0414 13:31:44.556112 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetURL
	I0414 13:31:44.557898 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | using libvirt version 6000000
	I0414 13:31:44.560669 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.561076 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:44.561113 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.561410 1212565 main.go:141] libmachine: Docker is up and running!
	I0414 13:31:44.561430 1212565 main.go:141] libmachine: Reticulating splines...
	I0414 13:31:44.561439 1212565 client.go:171] duration metric: took 22.252550727s to LocalClient.Create
	I0414 13:31:44.561470 1212565 start.go:167] duration metric: took 22.252626982s to libmachine.API.Create "kubernetes-upgrade-225418"
	I0414 13:31:44.561485 1212565 start.go:293] postStartSetup for "kubernetes-upgrade-225418" (driver="kvm2")
	I0414 13:31:44.561495 1212565 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:31:44.561512 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:31:44.561789 1212565 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:31:44.561840 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:44.565155 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.565577 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:44.565635 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.565828 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:31:44.566199 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:44.566492 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:31:44.566711 1212565 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa Username:docker}
	I0414 13:31:44.653601 1212565 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:31:44.659337 1212565 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:31:44.659380 1212565 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 13:31:44.659534 1212565 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 13:31:44.659753 1212565 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem -> 11757462.pem in /etc/ssl/certs
	I0414 13:31:44.659948 1212565 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:31:44.671539 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:31:44.707790 1212565 start.go:296] duration metric: took 146.287264ms for postStartSetup
	I0414 13:31:44.707867 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetConfigRaw
	I0414 13:31:44.708592 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetIP
	I0414 13:31:44.712491 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.713154 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:44.713189 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.713439 1212565 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/config.json ...
	I0414 13:31:44.713798 1212565 start.go:128] duration metric: took 22.433189076s to createHost
	I0414 13:31:44.713838 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:44.717654 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.718081 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:44.718116 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.718496 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:31:44.718802 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:44.719009 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:44.719181 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:31:44.719394 1212565 main.go:141] libmachine: Using SSH client type: native
	I0414 13:31:44.719716 1212565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:31:44.719743 1212565 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:31:44.829032 1212565 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744637504.773602858
	
	I0414 13:31:44.829069 1212565 fix.go:216] guest clock: 1744637504.773602858
	I0414 13:31:44.829082 1212565 fix.go:229] Guest: 2025-04-14 13:31:44.773602858 +0000 UTC Remote: 2025-04-14 13:31:44.713821676 +0000 UTC m=+56.407479174 (delta=59.781182ms)
	I0414 13:31:44.829114 1212565 fix.go:200] guest clock delta is within tolerance: 59.781182ms
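	The fix.go lines compare the guest clock (read via "date +%s.%N" over SSH) against the host clock and skip resynchronization when the delta stays inside a tolerance. A tiny illustrative check using the timestamps from this log (the one-second tolerance is an assumption):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest and host timestamps copied from the log above.
		guest := time.Unix(0, int64(1744637504.773602858*float64(time.Second)))
		host := time.Date(2025, 4, 14, 13, 31, 44, 713821676, time.UTC)
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // illustrative threshold
		if delta <= tolerance {
			fmt.Printf("guest clock delta %s is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %s exceeds tolerance; would resync\n", delta)
		}
	}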
	I0414 13:31:44.829121 1212565 start.go:83] releasing machines lock for "kubernetes-upgrade-225418", held for 22.548759043s
	I0414 13:31:44.829159 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:31:44.829509 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetIP
	I0414 13:31:44.832664 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.833274 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:44.833310 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.833613 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:31:44.834426 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:31:44.834808 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:31:44.834942 1212565 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:31:44.835023 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:44.835170 1212565 ssh_runner.go:195] Run: cat /version.json
	I0414 13:31:44.835205 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:31:44.838720 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.839002 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.839047 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:44.839071 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.839372 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:31:44.839564 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:44.839609 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:44.839627 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:44.839844 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:31:44.839859 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:31:44.840053 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:31:44.840072 1212565 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa Username:docker}
	I0414 13:31:44.840197 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:31:44.840337 1212565 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa Username:docker}
	I0414 13:31:44.948297 1212565 ssh_runner.go:195] Run: systemctl --version
	I0414 13:31:44.954860 1212565 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:31:45.159346 1212565 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:31:45.167887 1212565 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:31:45.167986 1212565 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:31:45.193692 1212565 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:31:45.193727 1212565 start.go:495] detecting cgroup driver to use...
	I0414 13:31:45.193818 1212565 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:31:45.220911 1212565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:31:45.240863 1212565 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:31:45.240932 1212565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:31:45.258797 1212565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:31:45.281182 1212565 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:31:45.430125 1212565 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:31:45.613113 1212565 docker.go:233] disabling docker service ...
	I0414 13:31:45.613187 1212565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:31:45.630024 1212565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:31:45.646376 1212565 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:31:45.794576 1212565 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:31:45.927758 1212565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:31:45.943471 1212565 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:31:45.966590 1212565 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 13:31:45.966663 1212565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:31:45.978868 1212565 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:31:45.978953 1212565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:31:45.991368 1212565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:31:46.005913 1212565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:31:46.018666 1212565 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:31:46.035598 1212565 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:31:46.047628 1212565 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:31:46.047726 1212565 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:31:46.067616 1212565 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:31:46.080967 1212565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:31:46.213735 1212565 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:31:46.333486 1212565 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:31:46.333591 1212565 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:31:46.340329 1212565 start.go:563] Will wait 60s for crictl version
	I0414 13:31:46.340418 1212565 ssh_runner.go:195] Run: which crictl
	I0414 13:31:46.345193 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:31:46.400199 1212565 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:31:46.400305 1212565 ssh_runner.go:195] Run: crio --version
	I0414 13:31:46.438107 1212565 ssh_runner.go:195] Run: crio --version
	I0414 13:31:46.476690 1212565 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 13:31:46.478068 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetIP
	I0414 13:31:46.481663 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:46.482146 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:31:37 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:31:46.482195 1212565 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:31:46.482525 1212565 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 13:31:46.487619 1212565 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:31:46.503766 1212565 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-225418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-225418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:31:46.503887 1212565 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 13:31:46.503936 1212565 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:31:46.546041 1212565 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 13:31:46.546140 1212565 ssh_runner.go:195] Run: which lz4
	I0414 13:31:46.552510 1212565 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:31:46.558818 1212565 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:31:46.558867 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 13:31:48.265431 1212565 crio.go:462] duration metric: took 1.712969986s to copy over tarball
	I0414 13:31:48.265530 1212565 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:31:51.545209 1212565 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.279634677s)
	I0414 13:31:51.545256 1212565 crio.go:469] duration metric: took 3.279787799s to extract the tarball
	I0414 13:31:51.545267 1212565 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:31:51.595884 1212565 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:31:51.654211 1212565 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 13:31:51.654245 1212565 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 13:31:51.654366 1212565 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:31:51.654395 1212565 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:31:51.654422 1212565 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:31:51.654458 1212565 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:31:51.654504 1212565 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 13:31:51.654580 1212565 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 13:31:51.654357 1212565 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:31:51.654832 1212565 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:31:51.656078 1212565 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:31:51.656080 1212565 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 13:31:51.656115 1212565 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:31:51.656101 1212565 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:31:51.656142 1212565 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:31:51.656152 1212565 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 13:31:51.656326 1212565 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:31:51.656372 1212565 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:31:51.804660 1212565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 13:31:51.810449 1212565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:31:51.812049 1212565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 13:31:51.814385 1212565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:31:51.820137 1212565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:31:51.821531 1212565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:31:51.847276 1212565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 13:31:51.887682 1212565 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 13:31:51.887742 1212565 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 13:31:51.887804 1212565 ssh_runner.go:195] Run: which crictl
	I0414 13:31:51.989684 1212565 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 13:31:51.989746 1212565 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:31:51.989813 1212565 ssh_runner.go:195] Run: which crictl
	I0414 13:31:51.994199 1212565 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 13:31:51.994269 1212565 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:31:51.994340 1212565 ssh_runner.go:195] Run: which crictl
	I0414 13:31:51.997927 1212565 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 13:31:51.998004 1212565 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:31:51.998065 1212565 ssh_runner.go:195] Run: which crictl
	I0414 13:31:52.010526 1212565 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 13:31:52.010582 1212565 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 13:31:52.010623 1212565 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:31:52.010662 1212565 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 13:31:52.010681 1212565 ssh_runner.go:195] Run: which crictl
	I0414 13:31:52.010693 1212565 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 13:31:52.010586 1212565 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:31:52.010736 1212565 ssh_runner.go:195] Run: which crictl
	I0414 13:31:52.010738 1212565 ssh_runner.go:195] Run: which crictl
	I0414 13:31:52.010786 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:31:52.010810 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:31:52.010866 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:31:52.010920 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:31:52.030865 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:31:52.137463 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:31:52.137533 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:31:52.137463 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:31:52.137684 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:31:52.137699 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:31:52.137723 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:31:52.143901 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:31:52.301366 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:31:52.301448 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:31:52.301379 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:31:52.301513 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:31:52.301579 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:31:52.301649 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:31:52.301702 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:31:52.485005 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:31:52.485024 1212565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 13:31:52.485053 1212565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 13:31:52.485096 1212565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 13:31:52.485182 1212565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 13:31:52.485184 1212565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:31:52.485225 1212565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 13:31:52.529878 1212565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 13:31:52.529915 1212565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 13:31:53.311314 1212565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:31:53.457274 1212565 cache_images.go:92] duration metric: took 1.80300741s to LoadCachedImages
	W0414 13:31:53.457419 1212565 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
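Note: the preload did not contain the v1.20.0 images and the per-image cache files under .minikube/cache/images are missing, so LoadCachedImages gives up with the warning above and kubeadm will pull the images itself later. A small Go sketch of the on-disk existence check behind that warning; the cache layout is inferred from the paths in the log and the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedImagePath maps an image ref such as "registry.k8s.io/coredns:1.7.0"
// to the cache layout seen in the log:
//   <cacheDir>/images/amd64/registry.k8s.io/coredns_1.7.0
func cachedImagePath(cacheDir, image string) string {
	return filepath.Join(cacheDir, "images", "amd64", strings.ReplaceAll(image, ":", "_"))
}

func main() {
	p := cachedImagePath(os.ExpandEnv("$HOME/.minikube/cache"), "registry.k8s.io/coredns:1.7.0")
	if _, err := os.Stat(p); os.IsNotExist(err) {
		// This is the condition behind the "Unable to load cached images" warning.
		fmt.Printf("missing cached image file: %s\n", p)
	}
}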
	I0414 13:31:53.457450 1212565 kubeadm.go:934] updating node { 192.168.50.229 8443 v1.20.0 crio true true} ...
	I0414 13:31:53.457576 1212565 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-225418 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-225418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
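Note: the kubelet drop-in above pins the kubelet to the CRI-O socket, the profile's node name, and its node IP. A short Go sketch that renders an equivalent unit with text/template; the template text mirrors the logged unit, but the code itself is illustrative and not minikube's source:

package main

import (
	"os"
	"text/template"
)

// A kubelet systemd drop-in modelled on the one in the log above.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, struct{ KubernetesVersion, NodeName, NodeIP string }{
		"v1.20.0", "kubernetes-upgrade-225418", "192.168.50.229",
	})
}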
	I0414 13:31:53.457707 1212565 ssh_runner.go:195] Run: crio config
	I0414 13:31:53.517040 1212565 cni.go:84] Creating CNI manager for ""
	I0414 13:31:53.517075 1212565 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:31:53.517089 1212565 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:31:53.517114 1212565 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-225418 NodeName:kubernetes-upgrade-225418 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 13:31:53.517361 1212565 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-225418"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 13:31:53.517444 1212565 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 13:31:53.528489 1212565 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:31:53.528570 1212565 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:31:53.542026 1212565 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0414 13:31:53.564358 1212565 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:31:53.583740 1212565 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
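Note: the generated kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before being copied into place. One property worth checking in it is the KubeletConfiguration cgroupDriver, since a cgroup-driver mismatch between kubelet and runtime is a common reason for the kubelet failure seen later in this log. A stdlib-only Go sketch of such a check; the file path is taken from the log and the check itself is illustrative:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// cgroupDriverFrom scans a kubeadm config for the KubeletConfiguration
// cgroupDriver value ("cgroupfs" in the config above).
func cgroupDriverFrom(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		line := strings.TrimSpace(s.Text())
		if strings.HasPrefix(line, "cgroupDriver:") {
			return strings.TrimSpace(strings.TrimPrefix(line, "cgroupDriver:")), nil
		}
	}
	return "", s.Err()
}

func main() {
	driver, err := cgroupDriverFrom("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println("could not read kubeadm.yaml:", err)
		return
	}
	// Should match the cgroup manager the container runtime is using.
	fmt.Println("kubelet cgroupDriver:", driver)
}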
	I0414 13:31:53.605119 1212565 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0414 13:31:53.609830 1212565 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:31:53.624324 1212565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:31:53.776911 1212565 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:31:53.800251 1212565 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418 for IP: 192.168.50.229
	I0414 13:31:53.800286 1212565 certs.go:194] generating shared ca certs ...
	I0414 13:31:53.800308 1212565 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:31:53.800614 1212565 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:31:53.800679 1212565 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:31:53.800703 1212565 certs.go:256] generating profile certs ...
	I0414 13:31:53.800872 1212565 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/client.key
	I0414 13:31:53.800920 1212565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/client.crt with IP's: []
	I0414 13:31:54.355478 1212565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/client.crt ...
	I0414 13:31:54.355517 1212565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/client.crt: {Name:mk12ff85ae04e0056c240dd8db074bd51b901c98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:31:54.355760 1212565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/client.key ...
	I0414 13:31:54.355781 1212565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/client.key: {Name:mk7c61af2288b09428f18264be30ceda8d9bea9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:31:54.355898 1212565 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.key.0dccdc27
	I0414 13:31:54.355925 1212565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.crt.0dccdc27 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.229]
	I0414 13:31:54.625586 1212565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.crt.0dccdc27 ...
	I0414 13:31:54.625635 1212565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.crt.0dccdc27: {Name:mkcda552303446de6de4e94ef23fd4e50c06958c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:31:54.625880 1212565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.key.0dccdc27 ...
	I0414 13:31:54.625903 1212565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.key.0dccdc27: {Name:mkcee3352043da67addaef180f186a55b6d284e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:31:54.626048 1212565 certs.go:381] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.crt.0dccdc27 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.crt
	I0414 13:31:54.626165 1212565 certs.go:385] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.key.0dccdc27 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.key
	I0414 13:31:54.626259 1212565 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.key
	I0414 13:31:54.626282 1212565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.crt with IP's: []
	I0414 13:31:55.039717 1212565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.crt ...
	I0414 13:31:55.039756 1212565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.crt: {Name:mkf0534a65da5be619d8106962ee35e8eb5b8279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:31:55.039969 1212565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.key ...
	I0414 13:31:55.039996 1212565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.key: {Name:mkf7fd4be60b655b5f1c9c5846356fdafe3dfff8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
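Note: the profile certs generated above (client, apiserver with IP SANs, proxy-client/aggregator) are ordinary x509 certificates signed by the shared minikube CA. A self-contained Go sketch of issuing an apiserver-style serving cert with the same IP SANs; a throwaway CA stands in for minikubeCA, and the whole block is illustrative (errors elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API-server style serving cert with the IP SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.229"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}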
	I0414 13:31:55.040275 1212565 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:31:55.040336 1212565 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:31:55.040352 1212565 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:31:55.040386 1212565 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:31:55.040417 1212565 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:31:55.040446 1212565 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:31:55.040499 1212565 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:31:55.041459 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:31:55.079049 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:31:55.131483 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:31:55.159431 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:31:55.187074 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 13:31:55.215058 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:31:55.243763 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:31:55.270039 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 13:31:55.297649 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
	I0414 13:31:55.330919 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:31:55.358935 1212565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:31:55.393141 1212565 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:31:55.414909 1212565 ssh_runner.go:195] Run: openssl version
	I0414 13:31:55.421684 1212565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:31:55.439367 1212565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:31:55.445751 1212565 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:31:55.445827 1212565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:31:55.452788 1212565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
	I0414 13:31:55.465243 1212565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:31:55.477883 1212565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:31:55.483457 1212565 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:31:55.483533 1212565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:31:55.490602 1212565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 13:31:55.503096 1212565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:31:55.515899 1212565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:31:55.522655 1212565 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:31:55.522755 1212565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:31:55.531525 1212565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
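Note: the openssl/ln steps above install each CA PEM into the guest's trust store by linking it under its OpenSSL subject hash. A Go sketch of the same hash-and-symlink step; the paths come from the log, the helper itself is illustrative and would need root like the sudo commands above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA reproduces the logged step: hash the PEM with
// `openssl x509 -hash -noout -in <pem>` and link it as /etc/ssl/certs/<hash>.0.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // equivalent to: test -L <link>
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}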
	I0414 13:31:55.548201 1212565 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:31:55.555009 1212565 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 13:31:55.555087 1212565 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-225418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-225418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:31:55.555213 1212565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:31:55.555297 1212565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:31:55.601349 1212565 cri.go:89] found id: ""
	I0414 13:31:55.601469 1212565 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:31:55.611389 1212565 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:31:55.625923 1212565 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:31:55.641341 1212565 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:31:55.641375 1212565 kubeadm.go:157] found existing configuration files:
	
	I0414 13:31:55.641471 1212565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:31:55.653667 1212565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:31:55.653759 1212565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:31:55.664663 1212565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:31:55.675065 1212565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:31:55.675157 1212565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:31:55.686455 1212565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:31:55.697326 1212565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:31:55.697399 1212565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:31:55.708937 1212565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:31:55.720363 1212565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:31:55.720430 1212565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
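Note: the grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init runs. A Go sketch of that loop; the helper name is assumed, the file list and endpoint are taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleConfigs keeps a kubeconfig only if it already references the
// expected control-plane endpoint; otherwise it is removed, matching the
// grep/rm pairs in the log.
func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the right endpoint
		}
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Println("remove", f, ":", err)
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}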
	I0414 13:31:55.732255 1212565 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:31:56.060542 1212565 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:31:56.060676 1212565 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:31:56.256789 1212565 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:31:56.256987 1212565 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:31:56.257192 1212565 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:31:56.483913 1212565 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:31:56.582871 1212565 out.go:235]   - Generating certificates and keys ...
	I0414 13:31:56.583018 1212565 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:31:56.583147 1212565 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:31:56.658871 1212565 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 13:31:56.819964 1212565 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 13:31:56.974146 1212565 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 13:31:57.160885 1212565 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 13:31:57.368259 1212565 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 13:31:57.368498 1212565 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-225418 localhost] and IPs [192.168.50.229 127.0.0.1 ::1]
	I0414 13:31:57.725707 1212565 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 13:31:57.725977 1212565 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-225418 localhost] and IPs [192.168.50.229 127.0.0.1 ::1]
	I0414 13:31:57.894935 1212565 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 13:31:58.084016 1212565 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 13:31:58.196345 1212565 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 13:31:58.196567 1212565 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:31:58.315450 1212565 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:31:58.538682 1212565 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:31:58.614042 1212565 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:31:58.929033 1212565 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:31:58.947866 1212565 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:31:58.950828 1212565 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:31:58.950935 1212565 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:31:59.132895 1212565 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:31:59.134900 1212565 out.go:235]   - Booting up control plane ...
	I0414 13:31:59.135090 1212565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:31:59.140885 1212565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:31:59.145994 1212565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:31:59.151476 1212565 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:31:59.159982 1212565 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:32:39.106893 1212565 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 13:32:39.107303 1212565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:32:39.107572 1212565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:32:44.107407 1212565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:32:44.107796 1212565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:32:54.107120 1212565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:32:54.107415 1212565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:33:14.107479 1212565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:33:14.107774 1212565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:33:54.108870 1212565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:33:54.109157 1212565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:33:54.109172 1212565 kubeadm.go:310] 
	I0414 13:33:54.109232 1212565 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 13:33:54.109310 1212565 kubeadm.go:310] 		timed out waiting for the condition
	I0414 13:33:54.109321 1212565 kubeadm.go:310] 
	I0414 13:33:54.109352 1212565 kubeadm.go:310] 	This error is likely caused by:
	I0414 13:33:54.109384 1212565 kubeadm.go:310] 		- The kubelet is not running
	I0414 13:33:54.109523 1212565 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 13:33:54.109546 1212565 kubeadm.go:310] 
	I0414 13:33:54.109684 1212565 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 13:33:54.109733 1212565 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 13:33:54.109774 1212565 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 13:33:54.109781 1212565 kubeadm.go:310] 
	I0414 13:33:54.109926 1212565 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 13:33:54.110032 1212565 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 13:33:54.110039 1212565 kubeadm.go:310] 
	I0414 13:33:54.110180 1212565 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 13:33:54.110313 1212565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 13:33:54.110413 1212565 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 13:33:54.110504 1212565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 13:33:54.110513 1212565 kubeadm.go:310] 
	I0414 13:33:54.112307 1212565 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 13:33:54.112438 1212565 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 13:33:54.112538 1212565 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 13:33:54.112697 1212565 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-225418 localhost] and IPs [192.168.50.229 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-225418 localhost] and IPs [192.168.50.229 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-225418 localhost] and IPs [192.168.50.229 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-225418 localhost] and IPs [192.168.50.229 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 13:33:54.112748 1212565 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 13:33:56.204309 1212565 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.091515089s)
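Note: after the first kubeadm init times out waiting for the kubelet, minikube runs kubeadm reset --force (above) and retries with the same config. The kubeadm error text points at the useful triage: check the kubelet with systemctl/journalctl and list kube containers through CRI-O. A small Go sketch of the suggested crictl listing; it simply mirrors the `crictl ps -a | grep kube | grep -v pause` pipeline from the message:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of:
	//   crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl failed:", err, string(out))
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
			fmt.Println(line) // inspect the failing one with: crictl logs <CONTAINERID>
		}
	}
}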
	I0414 13:33:56.204525 1212565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:33:56.223574 1212565 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:33:56.235332 1212565 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:33:56.235365 1212565 kubeadm.go:157] found existing configuration files:
	
	I0414 13:33:56.235431 1212565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:33:56.246132 1212565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:33:56.246214 1212565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:33:56.256996 1212565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:33:56.267297 1212565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:33:56.267373 1212565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:33:56.277976 1212565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:33:56.289715 1212565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:33:56.289795 1212565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:33:56.301433 1212565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:33:56.315157 1212565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:33:56.315239 1212565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:33:56.329352 1212565 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:33:56.570532 1212565 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 13:35:52.881838 1212565 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 13:35:52.881960 1212565 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 13:35:52.883290 1212565 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:35:52.883394 1212565 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:35:52.883511 1212565 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:35:52.883628 1212565 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:35:52.883765 1212565 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:35:52.883850 1212565 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:35:52.885493 1212565 out.go:235]   - Generating certificates and keys ...
	I0414 13:35:52.885577 1212565 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:35:52.885651 1212565 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:35:52.885741 1212565 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 13:35:52.885814 1212565 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 13:35:52.885917 1212565 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 13:35:52.886001 1212565 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 13:35:52.886083 1212565 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 13:35:52.886163 1212565 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 13:35:52.886246 1212565 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 13:35:52.886372 1212565 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 13:35:52.886446 1212565 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 13:35:52.886613 1212565 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:35:52.886687 1212565 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:35:52.886747 1212565 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:35:52.886813 1212565 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:35:52.886909 1212565 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:35:52.887068 1212565 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:35:52.887265 1212565 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:35:52.887335 1212565 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:35:52.887433 1212565 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:35:52.888977 1212565 out.go:235]   - Booting up control plane ...
	I0414 13:35:52.889130 1212565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:35:52.889260 1212565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:35:52.889362 1212565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:35:52.889463 1212565 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:35:52.889648 1212565 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:35:52.889724 1212565 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 13:35:52.889818 1212565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:35:52.890069 1212565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:35:52.890172 1212565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:35:52.890449 1212565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:35:52.890531 1212565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:35:52.890719 1212565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:35:52.890814 1212565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:35:52.890986 1212565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:35:52.891048 1212565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:35:52.891215 1212565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:35:52.891229 1212565 kubeadm.go:310] 
	I0414 13:35:52.891292 1212565 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 13:35:52.891354 1212565 kubeadm.go:310] 		timed out waiting for the condition
	I0414 13:35:52.891363 1212565 kubeadm.go:310] 
	I0414 13:35:52.891415 1212565 kubeadm.go:310] 	This error is likely caused by:
	I0414 13:35:52.891465 1212565 kubeadm.go:310] 		- The kubelet is not running
	I0414 13:35:52.891593 1212565 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 13:35:52.891601 1212565 kubeadm.go:310] 
	I0414 13:35:52.891748 1212565 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 13:35:52.891799 1212565 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 13:35:52.891863 1212565 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 13:35:52.891881 1212565 kubeadm.go:310] 
	I0414 13:35:52.891979 1212565 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 13:35:52.892079 1212565 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 13:35:52.892090 1212565 kubeadm.go:310] 
	I0414 13:35:52.892218 1212565 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 13:35:52.892328 1212565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 13:35:52.892428 1212565 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 13:35:52.892527 1212565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 13:35:52.892557 1212565 kubeadm.go:310] 
	I0414 13:35:52.892620 1212565 kubeadm.go:394] duration metric: took 3m57.337539096s to StartCluster
	I0414 13:35:52.892682 1212565 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:35:52.892759 1212565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:35:52.936992 1212565 cri.go:89] found id: ""
	I0414 13:35:52.937030 1212565 logs.go:282] 0 containers: []
	W0414 13:35:52.937046 1212565 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:35:52.937053 1212565 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:35:52.937151 1212565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:35:52.980739 1212565 cri.go:89] found id: ""
	I0414 13:35:52.980779 1212565 logs.go:282] 0 containers: []
	W0414 13:35:52.980791 1212565 logs.go:284] No container was found matching "etcd"
	I0414 13:35:52.980799 1212565 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:35:52.980881 1212565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:35:53.029603 1212565 cri.go:89] found id: ""
	I0414 13:35:53.029645 1212565 logs.go:282] 0 containers: []
	W0414 13:35:53.029657 1212565 logs.go:284] No container was found matching "coredns"
	I0414 13:35:53.029666 1212565 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:35:53.029742 1212565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:35:53.070647 1212565 cri.go:89] found id: ""
	I0414 13:35:53.070685 1212565 logs.go:282] 0 containers: []
	W0414 13:35:53.070697 1212565 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:35:53.070707 1212565 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:35:53.070799 1212565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:35:53.112854 1212565 cri.go:89] found id: ""
	I0414 13:35:53.112894 1212565 logs.go:282] 0 containers: []
	W0414 13:35:53.112905 1212565 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:35:53.112914 1212565 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:35:53.112996 1212565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:35:53.163303 1212565 cri.go:89] found id: ""
	I0414 13:35:53.163355 1212565 logs.go:282] 0 containers: []
	W0414 13:35:53.163368 1212565 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:35:53.163377 1212565 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:35:53.163463 1212565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:35:53.211497 1212565 cri.go:89] found id: ""
	I0414 13:35:53.211537 1212565 logs.go:282] 0 containers: []
	W0414 13:35:53.211550 1212565 logs.go:284] No container was found matching "kindnet"
	I0414 13:35:53.211566 1212565 logs.go:123] Gathering logs for kubelet ...
	I0414 13:35:53.211585 1212565 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:35:53.271559 1212565 logs.go:123] Gathering logs for dmesg ...
	I0414 13:35:53.271607 1212565 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:35:53.287360 1212565 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:35:53.287406 1212565 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:35:53.619964 1212565 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:35:53.619997 1212565 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:35:53.620026 1212565 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:35:53.786199 1212565 logs.go:123] Gathering logs for container status ...
	I0414 13:35:53.786256 1212565 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 13:35:53.835437 1212565 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 13:35:53.835542 1212565 out.go:270] * 
	* 
	W0414 13:35:53.835619 1212565 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:35:53.835642 1212565 out.go:270] * 
	* 
	W0414 13:35:53.836749 1212565 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 13:35:53.840467 1212565 out.go:201] 
	W0414 13:35:53.842198 1212565 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:35:53.842268 1212565 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 13:35:53.842308 1212565 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 13:35:53.844211 1212565 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-225418 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
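The exit above is K8S_KUBELET_NOT_RUNNING, and the log's own suggestion is to retry with the kubelet cgroup driver pinned to systemd. A minimal sketch of that retry, assuming the same profile, memory, driver, and runtime flags as the failing invocation (the --extra-config flag comes from the suggestion printed in the log, not from the test itself):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-225418 \
	  --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails to come up, the kubeadm output above points at 'journalctl -xeu kubelet' on the node for the underlying error.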
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-225418
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-225418: (1.851930281s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-225418 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-225418 status --format={{.Host}}: exit status 7 (85.635337ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225418 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-225418 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.042129801s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-225418 version --output=json
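The server version after the upgrade can be read straight out of that JSON output; a small sketch, assuming jq is available on the host (only the kubectl invocation above is part of the test, the jq filter is illustrative):

	kubectl --context kubernetes-upgrade-225418 version --output=json \
	  | jq -r .serverVersion.gitVersion
	# expected to print v1.32.2 after the successful start above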
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225418 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-225418 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (151.15672ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-225418] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-225418
	    minikube start -p kubernetes-upgrade-225418 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2254182 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-225418 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225418 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-225418 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.153075389s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-14 13:37:20.300000823 +0000 UTC m=+4714.233208001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-225418 -n kubernetes-upgrade-225418
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-225418 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-225418 logs -n 25: (1.815285439s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-734713 sudo                 | cilium-734713             | jenkins | v1.35.0 | 14 Apr 25 13:34 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-734713 sudo                 | cilium-734713             | jenkins | v1.35.0 | 14 Apr 25 13:34 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-734713 sudo                 | cilium-734713             | jenkins | v1.35.0 | 14 Apr 25 13:34 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-734713 sudo find            | cilium-734713             | jenkins | v1.35.0 | 14 Apr 25 13:34 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-734713 sudo crio            | cilium-734713             | jenkins | v1.35.0 | 14 Apr 25 13:34 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-734713                      | cilium-734713             | jenkins | v1.35.0 | 14 Apr 25 13:34 UTC | 14 Apr 25 13:34 UTC |
	| start   | -p pause-527439 --memory=2048         | pause-527439              | jenkins | v1.35.0 | 14 Apr 25 13:34 UTC | 14 Apr 25 13:35 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-902605 ssh cat     | force-systemd-flag-902605 | jenkins | v1.35.0 | 14 Apr 25 13:35 UTC | 14 Apr 25 13:35 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-902605          | force-systemd-flag-902605 | jenkins | v1.35.0 | 14 Apr 25 13:35 UTC | 14 Apr 25 13:35 UTC |
	| start   | -p cert-options-724745                | cert-options-724745       | jenkins | v1.35.0 | 14 Apr 25 13:35 UTC | 14 Apr 25 13:35 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-527439                       | pause-527439              | jenkins | v1.35.0 | 14 Apr 25 13:35 UTC | 14 Apr 25 13:36 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-225418          | kubernetes-upgrade-225418 | jenkins | v1.35.0 | 14 Apr 25 13:35 UTC | 14 Apr 25 13:35 UTC |
	| ssh     | cert-options-724745 ssh               | cert-options-724745       | jenkins | v1.35.0 | 14 Apr 25 13:35 UTC | 14 Apr 25 13:35 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-724745 -- sudo        | cert-options-724745       | jenkins | v1.35.0 | 14 Apr 25 13:35 UTC | 14 Apr 25 13:35 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-724745                | cert-options-724745       | jenkins | v1.35.0 | 14 Apr 25 13:35 UTC | 14 Apr 25 13:35 UTC |
	| start   | -p kubernetes-upgrade-225418          | kubernetes-upgrade-225418 | jenkins | v1.35.0 | 14 Apr 25 13:35 UTC | 14 Apr 25 13:36 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-966509             | old-k8s-version-966509    | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| pause   | -p pause-527439                       | pause-527439              | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-527439                       | pause-527439              | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-527439                       | pause-527439              | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-225418          | kubernetes-upgrade-225418 | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-225418          | kubernetes-upgrade-225418 | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:37 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-527439                       | pause-527439              | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-527439                       | pause-527439              | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC | 14 Apr 25 13:36 UTC |
	| start   | -p no-preload-824763                  | no-preload-824763         | jenkins | v1.35.0 | 14 Apr 25 13:36 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
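	# --- editor's sketch (not part of the captured audit table) ----------------------
	# The last audit row above is the "no-preload" profile whose start log follows.
	# Reproducing that invocation by hand would look roughly like this, assuming the
	# same tree-built binary at out/minikube-linux-amd64:
	out/minikube-linux-amd64 start -p no-preload-824763 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --preload=false --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.32.2
	# ----------------------------------------------------------------------------------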
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 13:36:48
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 13:36:48.213543 1220008 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:36:48.213852 1220008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:36:48.213870 1220008 out.go:358] Setting ErrFile to fd 2...
	I0414 13:36:48.213879 1220008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:36:48.214151 1220008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:36:48.214855 1220008 out.go:352] Setting JSON to false
	I0414 13:36:48.216231 1220008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19155,"bootTime":1744618653,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:36:48.216312 1220008 start.go:139] virtualization: kvm guest
	I0414 13:36:48.218614 1220008 out.go:177] * [no-preload-824763] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:36:48.220368 1220008 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 13:36:48.220387 1220008 notify.go:220] Checking for updates...
	I0414 13:36:48.223034 1220008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:36:48.224581 1220008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:36:48.226066 1220008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:36:48.227628 1220008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:36:48.229254 1220008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:36:48.231589 1220008 config.go:182] Loaded profile config "cert-expiration-737652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:36:48.231772 1220008 config.go:182] Loaded profile config "kubernetes-upgrade-225418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:36:48.231924 1220008 config.go:182] Loaded profile config "old-k8s-version-966509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 13:36:48.232103 1220008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:36:48.277883 1220008 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 13:36:48.279543 1220008 start.go:297] selected driver: kvm2
	I0414 13:36:48.279603 1220008 start.go:901] validating driver "kvm2" against <nil>
	I0414 13:36:48.279626 1220008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:36:48.281362 1220008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:48.281561 1220008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:36:48.303111 1220008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:36:48.303194 1220008 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 13:36:48.303600 1220008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:36:48.303730 1220008 cni.go:84] Creating CNI manager for ""
	I0414 13:36:48.303800 1220008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:36:48.303817 1220008 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 13:36:48.303901 1220008 start.go:340] cluster config:
	{Name:no-preload-824763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-824763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:36:48.304113 1220008 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:48.307410 1220008 out.go:177] * Starting "no-preload-824763" primary control-plane node in "no-preload-824763" cluster
	I0414 13:36:45.834632 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetIP
	I0414 13:36:45.838350 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:45.838893 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:45.838933 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:45.839180 1219144 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 13:36:45.843466 1219144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
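	# --- editor's sketch (not part of the captured log) --------------------------------
	# The two runs above keep /etc/hosts idempotent: grep checks whether the gateway
	# entry already exists, and the rewrite drops any stale host.minikube.internal line
	# before re-appending the current one. Standalone equivalent (gateway IP as logged):
	GW=192.168.61.1
	if ! grep -q "host.minikube.internal" /etc/hosts; then
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	    printf '%s\thost.minikube.internal\n' "${GW}"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	fi
	# ------------------------------------------------------------------------------------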
	I0414 13:36:45.857568 1219144 kubeadm.go:883] updating cluster {Name:old-k8s-version-966509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:36:45.857712 1219144 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 13:36:45.857774 1219144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:36:45.901114 1219144 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 13:36:45.901204 1219144 ssh_runner.go:195] Run: which lz4
	I0414 13:36:45.906068 1219144 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:36:45.911106 1219144 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:36:45.911137 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 13:36:47.705817 1219144 crio.go:462] duration metric: took 1.799793737s to copy over tarball
	I0414 13:36:47.705903 1219144 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
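	# --- editor's sketch (not part of the captured log) --------------------------------
	# The preload fast path above: stat shows the tarball is not yet on the guest, the
	# cached preloaded-images tarball (473237281 bytes) is copied over, and it is
	# unpacked into /var with lz4 to seed the container image store. Guest-side steps,
	# condensed from the logged commands:
	stat -c "%s %y" /preloaded.tar.lz4 || echo "missing: minikube copies it from its local cache first"
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	# ------------------------------------------------------------------------------------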
	I0414 13:36:45.430479 1219745 machine.go:93] provisionDockerMachine start ...
	I0414 13:36:45.430532 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:36:45.430943 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:36:45.435323 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:45.435560 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:45.435586 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:45.436131 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:36:45.436386 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:45.436565 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:45.436808 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:36:45.437029 1219745 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:45.437290 1219745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:36:45.437305 1219745 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 13:36:45.597853 1219745 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-225418
	
	I0414 13:36:45.597903 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetMachineName
	I0414 13:36:45.598346 1219745 buildroot.go:166] provisioning hostname "kubernetes-upgrade-225418"
	I0414 13:36:45.598385 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetMachineName
	I0414 13:36:45.598657 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:36:45.603172 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:45.603739 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:45.603855 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:45.604116 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:36:45.604448 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:45.604708 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:45.604935 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:36:45.605164 1219745 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:45.605450 1219745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:36:45.605470 1219745 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-225418 && echo "kubernetes-upgrade-225418" | sudo tee /etc/hostname
	I0414 13:36:45.741886 1219745 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-225418
	
	I0414 13:36:45.741924 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:36:45.746850 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:45.747390 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:45.747433 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:45.747706 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:36:45.747977 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:45.748186 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:45.748412 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:36:45.748664 1219745 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:45.748957 1219745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:36:45.749021 1219745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-225418' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-225418/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-225418' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:36:45.869815 1219745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:36:45.869925 1219745 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 13:36:45.869979 1219745 buildroot.go:174] setting up certificates
	I0414 13:36:45.870046 1219745 provision.go:84] configureAuth start
	I0414 13:36:45.870071 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetMachineName
	I0414 13:36:45.870457 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetIP
	I0414 13:36:45.874890 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:45.875408 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:45.875439 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:45.875725 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:36:45.878422 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:45.878834 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:45.878959 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:45.879079 1219745 provision.go:143] copyHostCerts
	I0414 13:36:45.879124 1219745 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem, removing ...
	I0414 13:36:45.879145 1219745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem
	I0414 13:36:45.879198 1219745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 13:36:45.879304 1219745 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem, removing ...
	I0414 13:36:45.879309 1219745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem
	I0414 13:36:45.879329 1219745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 13:36:45.879389 1219745 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem, removing ...
	I0414 13:36:45.879393 1219745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem
	I0414 13:36:45.879409 1219745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 13:36:45.879463 1219745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-225418 san=[127.0.0.1 192.168.50.229 kubernetes-upgrade-225418 localhost minikube]
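	# --- editor's sketch (not part of the captured log) --------------------------------
	# provision.go generates the machine's server certificate natively in Go; an
	# illustrative openssl equivalent (org and SANs exactly as logged above, key size
	# and validity chosen here only for the sketch) would be:
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.kubernetes-upgrade-225418" -out server.csr
	openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.229,DNS:kubernetes-upgrade-225418,DNS:localhost,DNS:minikube')
	# ------------------------------------------------------------------------------------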
	I0414 13:36:46.591568 1219745 provision.go:177] copyRemoteCerts
	I0414 13:36:46.591678 1219745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:36:46.591723 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:36:46.825990 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:46.826693 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:46.826755 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:46.827000 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:36:46.827287 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:46.827485 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:36:46.827634 1219745 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa Username:docker}
	I0414 13:36:46.928645 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:36:46.981036 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 13:36:47.016972 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 13:36:47.057638 1219745 provision.go:87] duration metric: took 1.187571801s to configureAuth
	I0414 13:36:47.057673 1219745 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:36:47.057838 1219745 config.go:182] Loaded profile config "kubernetes-upgrade-225418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:36:47.057914 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:36:47.061649 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:47.062267 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:47.062293 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:47.062723 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:36:47.062964 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:47.063185 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:47.063383 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:36:47.063595 1219745 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:47.063936 1219745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:36:47.063960 1219745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:36:48.309106 1220008 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
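	# --- editor's sketch (not part of the captured log) --------------------------------
	# The provisioner's tee command just above drops a sysconfig fragment so CRI-O
	# treats the service CIDR as an insecure registry, then restarts the runtime.
	# The same step, standalone:
	sudo mkdir -p /etc/sysconfig
	printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
	# ------------------------------------------------------------------------------------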
	I0414 13:36:48.309345 1220008 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/config.json ...
	I0414 13:36:48.309399 1220008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/config.json: {Name:mk93d4e83cd5e9ced45dc3e2c81e9af28bb0c531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:48.309465 1220008 cache.go:107] acquiring lock: {Name:mkd2b2a05b59fee20a6bb2ebfee649b47943ab4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:48.309517 1220008 cache.go:107] acquiring lock: {Name:mkc485898afcf091f2573d5ca352496d67bce2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:48.309563 1220008 cache.go:107] acquiring lock: {Name:mk91964c62c4395cc41f07089e2151bfb9d4cbff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:48.309584 1220008 cache.go:115] /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0414 13:36:48.309599 1220008 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 149.189µs
	I0414 13:36:48.309625 1220008 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0414 13:36:48.309479 1220008 cache.go:107] acquiring lock: {Name:mk2fcb76624dfadf7185c4c9e65f9012bfd9fdfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:48.309639 1220008 start.go:360] acquireMachinesLock for no-preload-824763: {Name:mk1c744e856a4885e6214d755048926590b4b12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 13:36:48.309688 1220008 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 13:36:48.309490 1220008 cache.go:107] acquiring lock: {Name:mk4532797d9cd111b801f62e7bc9adb85d5b8a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:48.309744 1220008 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 13:36:48.309739 1220008 cache.go:107] acquiring lock: {Name:mk817a93cf9d4fa0cd35c6ff824655d515d00f06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:48.309522 1220008 cache.go:107] acquiring lock: {Name:mkda9b038e00dd6d888a219effc7048209e2c0d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:48.309836 1220008 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 13:36:48.309878 1220008 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0414 13:36:48.309939 1220008 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0414 13:36:48.309696 1220008 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 13:36:48.309543 1220008 cache.go:107] acquiring lock: {Name:mk0ecc5286780bd40cf2a7c39d1172c2b2e6a800 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:48.310097 1220008 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 13:36:48.311714 1220008 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.2
	I0414 13:36:48.312191 1220008 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.2
	I0414 13:36:48.312227 1220008 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0414 13:36:48.312304 1220008 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0414 13:36:48.312320 1220008 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0414 13:36:48.312494 1220008 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0414 13:36:48.312552 1220008 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.2
	I0414 13:36:48.497994 1220008 cache.go:162] opening:  /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0414 13:36:48.500434 1220008 cache.go:162] opening:  /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2
	I0414 13:36:48.507976 1220008 cache.go:162] opening:  /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0414 13:36:48.523407 1220008 cache.go:162] opening:  /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2
	I0414 13:36:48.525904 1220008 cache.go:162] opening:  /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0414 13:36:48.526251 1220008 cache.go:162] opening:  /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2
	I0414 13:36:48.530383 1220008 cache.go:162] opening:  /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2
	I0414 13:36:48.593330 1220008 cache.go:157] /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0414 13:36:48.593367 1220008 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 283.84316ms
	I0414 13:36:48.593385 1220008 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0414 13:36:48.941587 1220008 cache.go:157] /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0414 13:36:48.941623 1220008 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 632.159792ms
	I0414 13:36:48.941640 1220008 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0414 13:36:49.935930 1220008 cache.go:157] /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0414 13:36:49.935978 1220008 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 1.626235185s
	I0414 13:36:49.935999 1220008 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0414 13:36:49.936185 1220008 cache.go:157] /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0414 13:36:49.936202 1220008 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 1.626659766s
	I0414 13:36:49.936213 1220008 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0414 13:36:49.994345 1220008 cache.go:157] /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0414 13:36:49.994388 1220008 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 1.684871478s
	I0414 13:36:49.994410 1220008 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0414 13:36:50.039448 1220008 cache.go:157] /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0414 13:36:50.039486 1220008 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 1.729921277s
	I0414 13:36:50.039504 1220008 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0414 13:36:50.191760 1220008 cache.go:157] /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0414 13:36:50.191791 1220008 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 1.882312976s
	I0414 13:36:50.191805 1220008 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0414 13:36:50.191826 1220008 cache.go:87] Successfully saved all images to host disk.
	I0414 13:36:50.724160 1219144 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.018229212s)
	I0414 13:36:50.724190 1219144 crio.go:469] duration metric: took 3.018337559s to extract the tarball
	I0414 13:36:50.724210 1219144 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:36:50.771017 1219144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:36:50.818714 1219144 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 13:36:50.818746 1219144 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 13:36:50.818836 1219144 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:36:50.818861 1219144 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:50.818894 1219144 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:50.818922 1219144 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 13:36:50.818931 1219144 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:50.818958 1219144 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:50.818995 1219144 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:50.819036 1219144 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 13:36:50.820619 1219144 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:50.820647 1219144 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:36:50.820689 1219144 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:50.820658 1219144 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:50.821205 1219144 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 13:36:50.821221 1219144 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 13:36:50.821232 1219144 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:50.821235 1219144 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:50.970900 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:50.985819 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:50.985967 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:50.992645 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:50.993132 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:50.997076 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 13:36:51.030948 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 13:36:51.085970 1219144 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 13:36:51.086059 1219144 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:51.086120 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.146664 1219144 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 13:36:51.146725 1219144 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:51.146793 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.147926 1219144 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 13:36:51.147981 1219144 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:51.148049 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.178493 1219144 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 13:36:51.178557 1219144 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:51.178496 1219144 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 13:36:51.178610 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.178625 1219144 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:51.178693 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.182325 1219144 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 13:36:51.182382 1219144 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 13:36:51.182433 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.186121 1219144 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 13:36:51.186179 1219144 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 13:36:51.186217 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:51.186227 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.186408 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:51.186415 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:51.190207 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:51.190305 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:51.198572 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:36:51.313867 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:51.313878 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:36:51.330571 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:51.336437 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:51.348184 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:51.348223 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:51.348282 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:36:51.508370 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:36:51.508420 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:51.508388 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:51.508482 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:51.524594 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:51.524673 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:36:51.524690 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:51.668769 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 13:36:51.668840 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:36:51.686575 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 13:36:51.686645 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 13:36:51.686688 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 13:36:51.695209 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 13:36:51.695258 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 13:36:51.716304 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 13:36:52.738660 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:36:52.883689 1219144 cache_images.go:92] duration metric: took 2.064921628s to LoadCachedImages
	W0414 13:36:52.883800 1219144 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
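	# --- editor's sketch (not part of the captured log) --------------------------------
	# The lines above are the per-image reconciliation for the v1.20.0 image set:
	# inspect what the runtime already holds, remove tags whose IDs do not match, then
	# load the image from the host-side cache tarball (which is missing here, hence the
	# warning). Pattern for a single image; WANT is the expected image ID, assumed
	# known to the caller:
	IMG=registry.k8s.io/kube-proxy:v1.20.0
	HAVE=$(sudo podman image inspect --format '{{.Id}}' "$IMG" 2>/dev/null || true)
	if [ "$HAVE" != "$WANT" ]; then
	  sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true
	  echo "load $IMG from .minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0"
	fi
	# ------------------------------------------------------------------------------------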
	I0414 13:36:52.883818 1219144 kubeadm.go:934] updating node { 192.168.61.227 8443 v1.20.0 crio true true} ...
	I0414 13:36:52.883931 1219144 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-966509 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 13:36:52.884022 1219144 ssh_runner.go:195] Run: crio config
	I0414 13:36:52.936981 1219144 cni.go:84] Creating CNI manager for ""
	I0414 13:36:52.937016 1219144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:36:52.937032 1219144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:36:52.937063 1219144 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-966509 NodeName:old-k8s-version-966509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 13:36:52.937242 1219144 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-966509"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 13:36:52.937349 1219144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 13:36:52.948058 1219144 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:36:52.948152 1219144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:36:52.958374 1219144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0414 13:36:52.977198 1219144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:36:52.996256 1219144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0414 13:36:53.019522 1219144 ssh_runner.go:195] Run: grep 192.168.61.227	control-plane.minikube.internal$ /etc/hosts
	I0414 13:36:53.024544 1219144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:36:53.039316 1219144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:36:53.161921 1219144 ssh_runner.go:195] Run: sudo systemctl start kubelet
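	# --- editor's sketch (not part of the captured log) --------------------------------
	# The runs above stage the kubelet unit files rendered in memory and start the
	# service: 10-kubeadm.conf (430 bytes), kubelet.service (352 bytes) and
	# kubeadm.yaml.new (2123 bytes) are scp'd to the directories below, then systemd is
	# reloaded. Condensed:
	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	# (minikube copies the rendered files into those directories at this point)
	sudo systemctl daemon-reload
	sudo systemctl start kubelet
	# ------------------------------------------------------------------------------------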
	I0414 13:36:53.180557 1219144 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509 for IP: 192.168.61.227
	I0414 13:36:53.180584 1219144 certs.go:194] generating shared ca certs ...
	I0414 13:36:53.180615 1219144 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.180797 1219144 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:36:53.180835 1219144 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:36:53.180846 1219144 certs.go:256] generating profile certs ...
	I0414 13:36:53.180903 1219144 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.key
	I0414 13:36:53.180916 1219144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.crt with IP's: []
	I0414 13:36:53.338412 1219144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.crt ...
	I0414 13:36:53.338447 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.crt: {Name:mk7b24a388c6ee9adbc0642aae7bc1daf3ab8786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.338672 1219144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.key ...
	I0414 13:36:53.338696 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.key: {Name:mk8647deffad8dec1bd3919a89d9a17086df5abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.338806 1219144 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key.c3cdf645
	I0414 13:36:53.338831 1219144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt.c3cdf645 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.227]
	I0414 13:36:53.586142 1219144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt.c3cdf645 ...
	I0414 13:36:53.586184 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt.c3cdf645: {Name:mk043f45a7c3125abf9a19446894d9548a4ae0a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.595916 1219144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key.c3cdf645 ...
	I0414 13:36:53.595965 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key.c3cdf645: {Name:mke70fbb4c1194bab5d0b89416f347c6874d9bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.596117 1219144 certs.go:381] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt.c3cdf645 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt
	I0414 13:36:53.596234 1219144 certs.go:385] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key.c3cdf645 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key
	I0414 13:36:53.596319 1219144 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.key
	I0414 13:36:53.596349 1219144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.crt with IP's: []
	I0414 13:36:53.752655 1219144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.crt ...
	I0414 13:36:53.752691 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.crt: {Name:mk598716f9ad4ef551c2a36e028320375e528cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.766007 1219144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.key ...
	I0414 13:36:53.766052 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.key: {Name:mka6774a43395a2ea38d7bfe258f08f4a4f5a394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.766358 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:36:53.766413 1219144 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:36:53.766426 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:36:53.766458 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:36:53.766487 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:36:53.766520 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:36:53.766585 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:36:53.767520 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:36:53.796003 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:36:53.824604 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:36:53.853438 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:36:53.882034 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 13:36:53.910641 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:36:53.940189 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:36:53.966542 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 13:36:53.994424 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:36:54.030193 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:36:54.063017 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
	I0414 13:36:54.096810 1219144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:36:54.125560 1219144 ssh_runner.go:195] Run: openssl version
	I0414 13:36:54.135129 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:36:54.149872 1219144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:36:54.154881 1219144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:36:54.154976 1219144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:36:54.161441 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 13:36:54.173596 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:36:54.186345 1219144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:36:54.191959 1219144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:36:54.192053 1219144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:36:54.198844 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
	I0414 13:36:54.213143 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:36:54.226906 1219144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:36:54.231891 1219144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:36:54.231974 1219144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:36:54.238607 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 13:36:54.251081 1219144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:36:54.255749 1219144 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 13:36:54.256213 1219144 kubeadm.go:392] StartCluster: {Name:old-k8s-version-966509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:36:54.258650 1219144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:36:54.258730 1219144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:36:54.297324 1219144 cri.go:89] found id: ""
	I0414 13:36:54.297445 1219144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:36:54.311781 1219144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:36:54.323135 1219144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:36:54.333661 1219144 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:36:54.333691 1219144 kubeadm.go:157] found existing configuration files:
	
	I0414 13:36:54.333740 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:36:54.343740 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:36:54.343822 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:36:54.355368 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:36:54.371034 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:36:54.371114 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:36:54.383265 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:36:54.397316 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:36:54.397382 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:36:54.408217 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:36:54.419226 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:36:54.419326 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:36:54.430574 1219144 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:36:54.575605 1219144 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:36:54.575700 1219144 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:36:54.730808 1219144 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:36:54.730982 1219144 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:36:54.731162 1219144 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:36:54.973894 1219144 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:36:54.976187 1219144 out.go:235]   - Generating certificates and keys ...
	I0414 13:36:54.976325 1219144 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:36:54.976431 1219144 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:36:55.156808 1220008 start.go:364] duration metric: took 6.847124156s to acquireMachinesLock for "no-preload-824763"
	I0414 13:36:55.156881 1220008 start.go:93] Provisioning new machine with config: &{Name:no-preload-824763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-824763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:36:55.156999 1220008 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 13:36:54.891828 1219745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:36:54.891871 1219745 machine.go:96] duration metric: took 9.461359785s to provisionDockerMachine
	I0414 13:36:54.891887 1219745 start.go:293] postStartSetup for "kubernetes-upgrade-225418" (driver="kvm2")
	I0414 13:36:54.891902 1219745 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:36:54.891929 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:36:54.892332 1219745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:36:54.892372 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:36:54.896152 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:54.896675 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:54.896713 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:54.896894 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:36:54.897148 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:54.897402 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:36:54.897568 1219745 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa Username:docker}
	I0414 13:36:54.986663 1219745 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:36:54.992470 1219745 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:36:54.992504 1219745 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 13:36:54.992583 1219745 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 13:36:54.992684 1219745 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem -> 11757462.pem in /etc/ssl/certs
	I0414 13:36:54.992806 1219745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:36:55.004719 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:36:55.036171 1219745 start.go:296] duration metric: took 144.263755ms for postStartSetup
	I0414 13:36:55.036225 1219745 fix.go:56] duration metric: took 9.633158798s for fixHost
	I0414 13:36:55.036259 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:36:55.040201 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:55.040677 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:55.040719 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:55.041083 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:36:55.041477 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:55.041701 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:55.041938 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:36:55.042190 1219745 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:55.042507 1219745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.229 22 <nil> <nil>}
	I0414 13:36:55.042527 1219745 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:36:55.156630 1219745 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744637815.148698143
	
	I0414 13:36:55.156666 1219745 fix.go:216] guest clock: 1744637815.148698143
	I0414 13:36:55.156675 1219745 fix.go:229] Guest: 2025-04-14 13:36:55.148698143 +0000 UTC Remote: 2025-04-14 13:36:55.03623075 +0000 UTC m=+9.883260664 (delta=112.467393ms)
	I0414 13:36:55.156697 1219745 fix.go:200] guest clock delta is within tolerance: 112.467393ms
	I0414 13:36:55.156702 1219745 start.go:83] releasing machines lock for "kubernetes-upgrade-225418", held for 9.75365025s
	I0414 13:36:55.156730 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:36:55.157031 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetIP
	I0414 13:36:55.160681 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:55.161197 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:55.161251 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:55.161493 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:36:55.162284 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:36:55.162498 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .DriverName
	I0414 13:36:55.162607 1219745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:36:55.162664 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:36:55.162733 1219745 ssh_runner.go:195] Run: cat /version.json
	I0414 13:36:55.162759 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHHostname
	I0414 13:36:55.165991 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:55.166264 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:55.166562 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:55.166602 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:55.166660 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:36:55.166740 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:55.166775 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:55.166855 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:55.167017 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHPort
	I0414 13:36:55.167040 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:36:55.167167 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHKeyPath
	I0414 13:36:55.167247 1219745 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa Username:docker}
	I0414 13:36:55.167394 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetSSHUsername
	I0414 13:36:55.167532 1219745 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/kubernetes-upgrade-225418/id_rsa Username:docker}
	I0414 13:36:55.271931 1219144 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 13:36:55.563713 1219144 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 13:36:55.700416 1219144 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 13:36:55.889666 1219144 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 13:36:56.039956 1219144 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 13:36:56.040256 1219144 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-966509] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0414 13:36:56.272184 1219144 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 13:36:56.272445 1219144 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-966509] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0414 13:36:56.474255 1219144 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 13:36:56.754040 1219144 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 13:36:56.920664 1219144 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 13:36:56.921017 1219144 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:36:57.007139 1219144 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:36:57.278871 1219144 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:36:57.528177 1219144 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:36:57.950090 1219144 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:36:57.977755 1219144 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:36:57.979471 1219144 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:36:57.979568 1219144 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:36:58.150839 1219144 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:36:55.160309 1220008 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 13:36:55.160510 1220008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:36:55.160566 1220008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:36:55.182274 1220008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43775
	I0414 13:36:55.182863 1220008 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:36:55.183551 1220008 main.go:141] libmachine: Using API Version  1
	I0414 13:36:55.183581 1220008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:36:55.184092 1220008 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:36:55.184335 1220008 main.go:141] libmachine: (no-preload-824763) Calling .GetMachineName
	I0414 13:36:55.184696 1220008 main.go:141] libmachine: (no-preload-824763) Calling .DriverName
	I0414 13:36:55.184989 1220008 start.go:159] libmachine.API.Create for "no-preload-824763" (driver="kvm2")
	I0414 13:36:55.185023 1220008 client.go:168] LocalClient.Create starting
	I0414 13:36:55.185063 1220008 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem
	I0414 13:36:55.185110 1220008 main.go:141] libmachine: Decoding PEM data...
	I0414 13:36:55.185126 1220008 main.go:141] libmachine: Parsing certificate...
	I0414 13:36:55.185202 1220008 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem
	I0414 13:36:55.185226 1220008 main.go:141] libmachine: Decoding PEM data...
	I0414 13:36:55.185237 1220008 main.go:141] libmachine: Parsing certificate...
	I0414 13:36:55.185263 1220008 main.go:141] libmachine: Running pre-create checks...
	I0414 13:36:55.185271 1220008 main.go:141] libmachine: (no-preload-824763) Calling .PreCreateCheck
	I0414 13:36:55.185750 1220008 main.go:141] libmachine: (no-preload-824763) Calling .GetConfigRaw
	I0414 13:36:55.186425 1220008 main.go:141] libmachine: Creating machine...
	I0414 13:36:55.186444 1220008 main.go:141] libmachine: (no-preload-824763) Calling .Create
	I0414 13:36:55.186693 1220008 main.go:141] libmachine: (no-preload-824763) creating KVM machine...
	I0414 13:36:55.186721 1220008 main.go:141] libmachine: (no-preload-824763) creating network...
	I0414 13:36:55.188428 1220008 main.go:141] libmachine: (no-preload-824763) DBG | found existing default KVM network
	I0414 13:36:55.190138 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:55.189864 1220066 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:27:af} reservation:<nil>}
	I0414 13:36:55.191143 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:55.191000 1220066 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:75:d0:0f} reservation:<nil>}
	I0414 13:36:55.192282 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:55.192141 1220066 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:41:ab:7c} reservation:<nil>}
	I0414 13:36:55.193681 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:55.193553 1220066 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003a27c0}
	I0414 13:36:55.193720 1220008 main.go:141] libmachine: (no-preload-824763) DBG | created network xml: 
	I0414 13:36:55.193732 1220008 main.go:141] libmachine: (no-preload-824763) DBG | <network>
	I0414 13:36:55.193746 1220008 main.go:141] libmachine: (no-preload-824763) DBG |   <name>mk-no-preload-824763</name>
	I0414 13:36:55.193756 1220008 main.go:141] libmachine: (no-preload-824763) DBG |   <dns enable='no'/>
	I0414 13:36:55.193765 1220008 main.go:141] libmachine: (no-preload-824763) DBG |   
	I0414 13:36:55.193782 1220008 main.go:141] libmachine: (no-preload-824763) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0414 13:36:55.193791 1220008 main.go:141] libmachine: (no-preload-824763) DBG |     <dhcp>
	I0414 13:36:55.193806 1220008 main.go:141] libmachine: (no-preload-824763) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0414 13:36:55.193816 1220008 main.go:141] libmachine: (no-preload-824763) DBG |     </dhcp>
	I0414 13:36:55.193824 1220008 main.go:141] libmachine: (no-preload-824763) DBG |   </ip>
	I0414 13:36:55.193831 1220008 main.go:141] libmachine: (no-preload-824763) DBG |   
	I0414 13:36:55.193870 1220008 main.go:141] libmachine: (no-preload-824763) DBG | </network>
	I0414 13:36:55.193891 1220008 main.go:141] libmachine: (no-preload-824763) DBG | 
	I0414 13:36:55.200957 1220008 main.go:141] libmachine: (no-preload-824763) DBG | trying to create private KVM network mk-no-preload-824763 192.168.72.0/24...
	I0414 13:36:55.305267 1220008 main.go:141] libmachine: (no-preload-824763) DBG | private KVM network mk-no-preload-824763 192.168.72.0/24 created
	I0414 13:36:55.305306 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:55.305225 1220066 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:36:55.305324 1220008 main.go:141] libmachine: (no-preload-824763) setting up store path in /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/no-preload-824763 ...
	I0414 13:36:55.305346 1220008 main.go:141] libmachine: (no-preload-824763) building disk image from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 13:36:55.305361 1220008 main.go:141] libmachine: (no-preload-824763) Downloading /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 13:36:55.590158 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:55.589948 1220066 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/no-preload-824763/id_rsa...
	I0414 13:36:55.746422 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:55.746189 1220066 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/no-preload-824763/no-preload-824763.rawdisk...
	I0414 13:36:55.746468 1220008 main.go:141] libmachine: (no-preload-824763) DBG | Writing magic tar header
	I0414 13:36:55.746490 1220008 main.go:141] libmachine: (no-preload-824763) DBG | Writing SSH key tar header
	I0414 13:36:55.746503 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:55.746342 1220066 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/no-preload-824763 ...
	I0414 13:36:55.746517 1220008 main.go:141] libmachine: (no-preload-824763) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/no-preload-824763
	I0414 13:36:55.746527 1220008 main.go:141] libmachine: (no-preload-824763) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines
	I0414 13:36:55.746539 1220008 main.go:141] libmachine: (no-preload-824763) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/no-preload-824763 (perms=drwx------)
	I0414 13:36:55.746554 1220008 main.go:141] libmachine: (no-preload-824763) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines (perms=drwxr-xr-x)
	I0414 13:36:55.746565 1220008 main.go:141] libmachine: (no-preload-824763) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube (perms=drwxr-xr-x)
	I0414 13:36:55.746576 1220008 main.go:141] libmachine: (no-preload-824763) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927 (perms=drwxrwxr-x)
	I0414 13:36:55.746584 1220008 main.go:141] libmachine: (no-preload-824763) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 13:36:55.746596 1220008 main.go:141] libmachine: (no-preload-824763) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 13:36:55.746612 1220008 main.go:141] libmachine: (no-preload-824763) creating domain...
	I0414 13:36:55.746621 1220008 main.go:141] libmachine: (no-preload-824763) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:36:55.746643 1220008 main.go:141] libmachine: (no-preload-824763) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927
	I0414 13:36:55.746652 1220008 main.go:141] libmachine: (no-preload-824763) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 13:36:55.746675 1220008 main.go:141] libmachine: (no-preload-824763) DBG | checking permissions on dir: /home/jenkins
	I0414 13:36:55.746687 1220008 main.go:141] libmachine: (no-preload-824763) DBG | checking permissions on dir: /home
	I0414 13:36:55.746693 1220008 main.go:141] libmachine: (no-preload-824763) DBG | skipping /home - not owner
	I0414 13:36:55.748339 1220008 main.go:141] libmachine: (no-preload-824763) define libvirt domain using xml: 
	I0414 13:36:55.748371 1220008 main.go:141] libmachine: (no-preload-824763) <domain type='kvm'>
	I0414 13:36:55.748381 1220008 main.go:141] libmachine: (no-preload-824763)   <name>no-preload-824763</name>
	I0414 13:36:55.748388 1220008 main.go:141] libmachine: (no-preload-824763)   <memory unit='MiB'>2200</memory>
	I0414 13:36:55.748396 1220008 main.go:141] libmachine: (no-preload-824763)   <vcpu>2</vcpu>
	I0414 13:36:55.748404 1220008 main.go:141] libmachine: (no-preload-824763)   <features>
	I0414 13:36:55.748413 1220008 main.go:141] libmachine: (no-preload-824763)     <acpi/>
	I0414 13:36:55.748421 1220008 main.go:141] libmachine: (no-preload-824763)     <apic/>
	I0414 13:36:55.748434 1220008 main.go:141] libmachine: (no-preload-824763)     <pae/>
	I0414 13:36:55.748444 1220008 main.go:141] libmachine: (no-preload-824763)     
	I0414 13:36:55.748453 1220008 main.go:141] libmachine: (no-preload-824763)   </features>
	I0414 13:36:55.748464 1220008 main.go:141] libmachine: (no-preload-824763)   <cpu mode='host-passthrough'>
	I0414 13:36:55.748473 1220008 main.go:141] libmachine: (no-preload-824763)   
	I0414 13:36:55.748488 1220008 main.go:141] libmachine: (no-preload-824763)   </cpu>
	I0414 13:36:55.748498 1220008 main.go:141] libmachine: (no-preload-824763)   <os>
	I0414 13:36:55.748509 1220008 main.go:141] libmachine: (no-preload-824763)     <type>hvm</type>
	I0414 13:36:55.748522 1220008 main.go:141] libmachine: (no-preload-824763)     <boot dev='cdrom'/>
	I0414 13:36:55.748533 1220008 main.go:141] libmachine: (no-preload-824763)     <boot dev='hd'/>
	I0414 13:36:55.748543 1220008 main.go:141] libmachine: (no-preload-824763)     <bootmenu enable='no'/>
	I0414 13:36:55.748553 1220008 main.go:141] libmachine: (no-preload-824763)   </os>
	I0414 13:36:55.748563 1220008 main.go:141] libmachine: (no-preload-824763)   <devices>
	I0414 13:36:55.748578 1220008 main.go:141] libmachine: (no-preload-824763)     <disk type='file' device='cdrom'>
	I0414 13:36:55.748595 1220008 main.go:141] libmachine: (no-preload-824763)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/no-preload-824763/boot2docker.iso'/>
	I0414 13:36:55.748617 1220008 main.go:141] libmachine: (no-preload-824763)       <target dev='hdc' bus='scsi'/>
	I0414 13:36:55.748629 1220008 main.go:141] libmachine: (no-preload-824763)       <readonly/>
	I0414 13:36:55.748640 1220008 main.go:141] libmachine: (no-preload-824763)     </disk>
	I0414 13:36:55.748653 1220008 main.go:141] libmachine: (no-preload-824763)     <disk type='file' device='disk'>
	I0414 13:36:55.748667 1220008 main.go:141] libmachine: (no-preload-824763)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 13:36:55.748686 1220008 main.go:141] libmachine: (no-preload-824763)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/no-preload-824763/no-preload-824763.rawdisk'/>
	I0414 13:36:55.748696 1220008 main.go:141] libmachine: (no-preload-824763)       <target dev='hda' bus='virtio'/>
	I0414 13:36:55.748703 1220008 main.go:141] libmachine: (no-preload-824763)     </disk>
	I0414 13:36:55.748709 1220008 main.go:141] libmachine: (no-preload-824763)     <interface type='network'>
	I0414 13:36:55.748725 1220008 main.go:141] libmachine: (no-preload-824763)       <source network='mk-no-preload-824763'/>
	I0414 13:36:55.748734 1220008 main.go:141] libmachine: (no-preload-824763)       <model type='virtio'/>
	I0414 13:36:55.748743 1220008 main.go:141] libmachine: (no-preload-824763)     </interface>
	I0414 13:36:55.748755 1220008 main.go:141] libmachine: (no-preload-824763)     <interface type='network'>
	I0414 13:36:55.748767 1220008 main.go:141] libmachine: (no-preload-824763)       <source network='default'/>
	I0414 13:36:55.748778 1220008 main.go:141] libmachine: (no-preload-824763)       <model type='virtio'/>
	I0414 13:36:55.748796 1220008 main.go:141] libmachine: (no-preload-824763)     </interface>
	I0414 13:36:55.748807 1220008 main.go:141] libmachine: (no-preload-824763)     <serial type='pty'>
	I0414 13:36:55.748818 1220008 main.go:141] libmachine: (no-preload-824763)       <target port='0'/>
	I0414 13:36:55.748828 1220008 main.go:141] libmachine: (no-preload-824763)     </serial>
	I0414 13:36:55.748838 1220008 main.go:141] libmachine: (no-preload-824763)     <console type='pty'>
	I0414 13:36:55.748849 1220008 main.go:141] libmachine: (no-preload-824763)       <target type='serial' port='0'/>
	I0414 13:36:55.748856 1220008 main.go:141] libmachine: (no-preload-824763)     </console>
	I0414 13:36:55.748863 1220008 main.go:141] libmachine: (no-preload-824763)     <rng model='virtio'>
	I0414 13:36:55.748876 1220008 main.go:141] libmachine: (no-preload-824763)       <backend model='random'>/dev/random</backend>
	I0414 13:36:55.748887 1220008 main.go:141] libmachine: (no-preload-824763)     </rng>
	I0414 13:36:55.748897 1220008 main.go:141] libmachine: (no-preload-824763)     
	I0414 13:36:55.748907 1220008 main.go:141] libmachine: (no-preload-824763)     
	I0414 13:36:55.748916 1220008 main.go:141] libmachine: (no-preload-824763)   </devices>
	I0414 13:36:55.748926 1220008 main.go:141] libmachine: (no-preload-824763) </domain>
	I0414 13:36:55.748938 1220008 main.go:141] libmachine: (no-preload-824763) 
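	The XML printed above is the libvirt domain definition minikube generates for the no-preload-824763 VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (the private mk-no-preload-824763 network plus the default network). If a run like this needs to be examined after the fact, the same definition can be dumped straight from libvirt; a sketch, assuming virsh is available on the Jenkins host and the domain still exists:
	
	  virsh -c qemu:///system dumpxml no-preload-824763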
	I0414 13:36:55.754567 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:66:09:b7 in network default
	I0414 13:36:55.755704 1220008 main.go:141] libmachine: (no-preload-824763) starting domain...
	I0414 13:36:55.755733 1220008 main.go:141] libmachine: (no-preload-824763) ensuring networks are active...
	I0414 13:36:55.755745 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:e8:bc:8a in network mk-no-preload-824763
	I0414 13:36:55.756890 1220008 main.go:141] libmachine: (no-preload-824763) Ensuring network default is active
	I0414 13:36:55.757286 1220008 main.go:141] libmachine: (no-preload-824763) Ensuring network mk-no-preload-824763 is active
	I0414 13:36:55.758076 1220008 main.go:141] libmachine: (no-preload-824763) getting domain XML...
	I0414 13:36:55.759430 1220008 main.go:141] libmachine: (no-preload-824763) creating domain...
	I0414 13:36:57.188151 1220008 main.go:141] libmachine: (no-preload-824763) waiting for IP...
	I0414 13:36:57.189366 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:e8:bc:8a in network mk-no-preload-824763
	I0414 13:36:57.189981 1220008 main.go:141] libmachine: (no-preload-824763) DBG | unable to find current IP address of domain no-preload-824763 in network mk-no-preload-824763
	I0414 13:36:57.190010 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:57.189964 1220066 retry.go:31] will retry after 227.857569ms: waiting for domain to come up
	I0414 13:36:57.419704 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:e8:bc:8a in network mk-no-preload-824763
	I0414 13:36:57.420427 1220008 main.go:141] libmachine: (no-preload-824763) DBG | unable to find current IP address of domain no-preload-824763 in network mk-no-preload-824763
	I0414 13:36:57.420461 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:57.420310 1220066 retry.go:31] will retry after 361.912659ms: waiting for domain to come up
	I0414 13:36:57.785744 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:e8:bc:8a in network mk-no-preload-824763
	I0414 13:36:57.786704 1220008 main.go:141] libmachine: (no-preload-824763) DBG | unable to find current IP address of domain no-preload-824763 in network mk-no-preload-824763
	I0414 13:36:57.786747 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:57.786575 1220066 retry.go:31] will retry after 471.940905ms: waiting for domain to come up
	I0414 13:36:55.276698 1219745 ssh_runner.go:195] Run: systemctl --version
	I0414 13:36:55.285626 1219745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:36:55.448834 1219745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:36:55.455764 1219745 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:36:55.455851 1219745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:36:55.465586 1219745 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0414 13:36:55.465621 1219745 start.go:495] detecting cgroup driver to use...
	I0414 13:36:55.465694 1219745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:36:55.486714 1219745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:36:55.504086 1219745 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:36:55.504167 1219745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:36:55.521643 1219745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:36:55.539259 1219745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:36:55.692373 1219745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:36:55.854685 1219745 docker.go:233] disabling docker service ...
	I0414 13:36:55.854770 1219745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:36:55.872572 1219745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:36:55.887816 1219745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:36:56.055228 1219745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:36:56.293625 1219745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:36:56.469400 1219745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:36:56.688016 1219745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 13:36:56.688110 1219745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:36:56.709794 1219745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:36:56.709890 1219745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:36:56.808948 1219745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:36:56.893860 1219745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:36:56.993271 1219745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:36:57.092076 1219745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:36:57.164622 1219745 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:36:57.272130 1219745 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:36:57.341832 1219745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:36:57.389341 1219745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:36:57.431062 1219745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:36:57.799027 1219745 ssh_runner.go:195] Run: sudo systemctl restart crio
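	The sed/grep commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf before the daemon is restarted. Reconstructed from those commands (an illustrative sketch, not a capture from the node), the drop-in should end up containing roughly:
	
	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]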
	I0414 13:36:58.598334 1219745 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:36:58.598427 1219745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:36:58.606240 1219745 start.go:563] Will wait 60s for crictl version
	I0414 13:36:58.606323 1219745 ssh_runner.go:195] Run: which crictl
	I0414 13:36:58.612808 1219745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:36:58.667089 1219745 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:36:58.667184 1219745 ssh_runner.go:195] Run: crio --version
	I0414 13:36:58.707495 1219745 ssh_runner.go:195] Run: crio --version
	I0414 13:36:58.749024 1219745 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 13:36:58.153461 1219144 out.go:235]   - Booting up control plane ...
	I0414 13:36:58.153624 1219144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:36:58.167983 1219144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:36:58.169501 1219144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:36:58.170951 1219144 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:36:58.177917 1219144 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:36:58.750668 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) Calling .GetIP
	I0414 13:36:58.754884 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:58.755721 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:28:d1", ip: ""} in network mk-kubernetes-upgrade-225418: {Iface:virbr2 ExpiryTime:2025-04-14 14:36:09 +0000 UTC Type:0 Mac:52:54:00:48:28:d1 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:kubernetes-upgrade-225418 Clientid:01:52:54:00:48:28:d1}
	I0414 13:36:58.755768 1219745 main.go:141] libmachine: (kubernetes-upgrade-225418) DBG | domain kubernetes-upgrade-225418 has defined IP address 192.168.50.229 and MAC address 52:54:00:48:28:d1 in network mk-kubernetes-upgrade-225418
	I0414 13:36:58.756211 1219745 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 13:36:58.761607 1219745 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-225418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kube
rnetes-upgrade-225418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:36:58.761781 1219745 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 13:36:58.761832 1219745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:36:58.817286 1219745 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 13:36:58.817315 1219745 crio.go:433] Images already preloaded, skipping extraction
	I0414 13:36:58.817386 1219745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:36:58.863540 1219745 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 13:36:58.863580 1219745 cache_images.go:84] Images are preloaded, skipping loading
	I0414 13:36:58.863590 1219745 kubeadm.go:934] updating node { 192.168.50.229 8443 v1.32.2 crio true true} ...
	I0414 13:36:58.863778 1219745 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-225418 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-225418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 13:36:58.863887 1219745 ssh_runner.go:195] Run: crio config
	I0414 13:36:58.918890 1219745 cni.go:84] Creating CNI manager for ""
	I0414 13:36:58.918934 1219745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:36:58.918951 1219745 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:36:58.918973 1219745 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.229 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-225418 NodeName:kubernetes-upgrade-225418 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 13:36:58.919106 1219745 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-225418"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.229"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.229"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 13:36:58.919173 1219745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 13:36:58.933982 1219745 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:36:58.934110 1219745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:36:58.947916 1219745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0414 13:36:58.968959 1219745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:36:58.990976 1219745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0414 13:36:59.012148 1219745 ssh_runner.go:195] Run: grep 192.168.50.229	control-plane.minikube.internal$ /etc/hosts
	I0414 13:36:59.017161 1219745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:36:59.200696 1219745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:36:59.219965 1219745 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418 for IP: 192.168.50.229
	I0414 13:36:59.219992 1219745 certs.go:194] generating shared ca certs ...
	I0414 13:36:59.220012 1219745 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:59.220204 1219745 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:36:59.220246 1219745 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:36:59.220267 1219745 certs.go:256] generating profile certs ...
	I0414 13:36:59.220380 1219745 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/client.key
	I0414 13:36:59.220506 1219745 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.key.0dccdc27
	I0414 13:36:59.220574 1219745 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.key
	I0414 13:36:59.220718 1219745 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:36:59.220754 1219745 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:36:59.220764 1219745 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:36:59.220797 1219745 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:36:59.220833 1219745 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:36:59.220863 1219745 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:36:59.220923 1219745 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:36:59.221725 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:36:59.295449 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:36:59.349967 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:36:59.518015 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:36:59.769565 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 13:36:59.859598 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:36:59.928480 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:36:59.989515 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kubernetes-upgrade-225418/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 13:37:00.031331 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:37:00.079124 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:37:00.120210 1219745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
	I0414 13:37:00.192583 1219745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:36:58.260324 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:e8:bc:8a in network mk-no-preload-824763
	I0414 13:36:58.260903 1220008 main.go:141] libmachine: (no-preload-824763) DBG | unable to find current IP address of domain no-preload-824763 in network mk-no-preload-824763
	I0414 13:36:58.260929 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:58.260870 1220066 retry.go:31] will retry after 393.157217ms: waiting for domain to come up
	I0414 13:36:58.655675 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:e8:bc:8a in network mk-no-preload-824763
	I0414 13:36:58.656332 1220008 main.go:141] libmachine: (no-preload-824763) DBG | unable to find current IP address of domain no-preload-824763 in network mk-no-preload-824763
	I0414 13:36:58.656369 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:58.656277 1220066 retry.go:31] will retry after 734.524104ms: waiting for domain to come up
	I0414 13:36:59.392684 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:e8:bc:8a in network mk-no-preload-824763
	I0414 13:36:59.393443 1220008 main.go:141] libmachine: (no-preload-824763) DBG | unable to find current IP address of domain no-preload-824763 in network mk-no-preload-824763
	I0414 13:36:59.393480 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:36:59.393340 1220066 retry.go:31] will retry after 816.12329ms: waiting for domain to come up
	I0414 13:37:00.211993 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:e8:bc:8a in network mk-no-preload-824763
	I0414 13:37:00.212727 1220008 main.go:141] libmachine: (no-preload-824763) DBG | unable to find current IP address of domain no-preload-824763 in network mk-no-preload-824763
	I0414 13:37:00.212762 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:37:00.212702 1220066 retry.go:31] will retry after 1.06101907s: waiting for domain to come up
	I0414 13:37:01.276421 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:e8:bc:8a in network mk-no-preload-824763
	I0414 13:37:01.277105 1220008 main.go:141] libmachine: (no-preload-824763) DBG | unable to find current IP address of domain no-preload-824763 in network mk-no-preload-824763
	I0414 13:37:01.277133 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:37:01.277060 1220066 retry.go:31] will retry after 923.464303ms: waiting for domain to come up
	I0414 13:37:02.202290 1220008 main.go:141] libmachine: (no-preload-824763) DBG | domain no-preload-824763 has defined MAC address 52:54:00:e8:bc:8a in network mk-no-preload-824763
	I0414 13:37:02.202980 1220008 main.go:141] libmachine: (no-preload-824763) DBG | unable to find current IP address of domain no-preload-824763 in network mk-no-preload-824763
	I0414 13:37:02.203050 1220008 main.go:141] libmachine: (no-preload-824763) DBG | I0414 13:37:02.202955 1220066 retry.go:31] will retry after 1.667225481s: waiting for domain to come up
	I0414 13:37:00.278492 1219745 ssh_runner.go:195] Run: openssl version
	I0414 13:37:00.296602 1219745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:37:00.329156 1219745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:37:00.338504 1219745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:37:00.338590 1219745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:37:00.352449 1219745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 13:37:00.381604 1219745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:37:00.479218 1219745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:37:00.499675 1219745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:37:00.499767 1219745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:37:00.513303 1219745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
	I0414 13:37:00.531781 1219745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:37:00.550783 1219745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:37:00.556570 1219745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:37:00.556664 1219745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:37:00.563986 1219745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 13:37:00.576304 1219745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:37:00.581617 1219745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 13:37:00.588612 1219745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 13:37:00.595063 1219745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 13:37:00.606507 1219745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 13:37:00.613535 1219745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 13:37:00.620410 1219745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 13:37:00.628036 1219745 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-225418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kuberne
tes-upgrade-225418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:37:00.628133 1219745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:37:00.628188 1219745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:37:00.668483 1219745 cri.go:89] found id: "9678ee880d011246e9a33b1cccf198aa29b3619b81bff0eb3bcc3f85c9c1f51c"
	I0414 13:37:00.668524 1219745 cri.go:89] found id: "7ea67a9e77e1bd00722166681f12c01aa13cacb67de3ad00eafab141c1f925a6"
	I0414 13:37:00.668530 1219745 cri.go:89] found id: "1332b1cb019a4cd3a5cb7f4f6fd733de31cdfa5c3942d329c2d6906b24a062d5"
	I0414 13:37:00.668537 1219745 cri.go:89] found id: "db28659e73687d9a5b0ae8539a1f0d7789f845a427238d5d79cca3124b799028"
	I0414 13:37:00.668542 1219745 cri.go:89] found id: "c76c8639c2b608aa98bd952f463586228782756aac8522368c4ff6bd815de2a9"
	I0414 13:37:00.668550 1219745 cri.go:89] found id: "a69a2e13093a15a319271a216702740d1b151be15bf08b759024600a99eaf546"
	I0414 13:37:00.668554 1219745 cri.go:89] found id: "95c6daabeef2d5fc9648258c2f71d612e2aab9dde358fe8e8dd85d644e3601a4"
	I0414 13:37:00.668559 1219745 cri.go:89] found id: "0aa94cd64f9c63350e1da0359a53595dc6d9f686b941162f6a13318cc2729544"
	I0414 13:37:00.668563 1219745 cri.go:89] found id: "bb70bf5aad604a5cbff95b518a9b79c18918fad9cbae09b5d1ba153da1e6dd03"
	I0414 13:37:00.668572 1219745 cri.go:89] found id: "41a3db1db9eec7cd9fdd4af49a9e9ef40aca77b068459f415c7c4bb91faded4b"
	I0414 13:37:00.668577 1219745 cri.go:89] found id: ""
	I0414 13:37:00.668633 1219745 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
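The CRI-O preparation recorded in the log above reduces to three steps: point crictl at the CRI-O socket, rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup), then reload systemd and restart the runtime. The commands below are a minimal shell sketch assembled from the "Run:" lines in that log; the paths, the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager are taken from the log itself, and the exact sed expressions minikube uses may differ between releases.

	# point crictl at the CRI-O socket
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch CRI-O to the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# apply the changes and confirm the runtime answers
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	sudo crictl version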
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-225418 -n kubernetes-upgrade-225418
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-225418 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-225418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-225418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-225418: (1.19354569s)
--- FAIL: TestKubernetesUpgrade (395.74s)
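The upgrade path in the log above also depends on the existing control-plane certificates still being usable: each "openssl x509 ... -checkend 86400" run succeeds only if the certificate remains valid for at least another 24 hours. Those checks can be reproduced by hand on the node; the loop below is an illustrative sketch assuming the same /var/lib/minikube/certs layout seen in the log, not minikube's own code.

	# exit status 0 from -checkend 86400 means the certificate is still valid a day from now
	for crt in apiserver-kubelet-client.crt apiserver-etcd-client.crt front-proxy-client.crt \
	           etcd/server.crt etcd/peer.crt etcd/healthcheck-client.crt; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400 \
	    && echo "OK       $crt" || echo "EXPIRING $crt"
	done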

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (293.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-966509 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-966509 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m52.69704722s)

                                                
                                                
-- stdout --
	* [old-k8s-version-966509] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-966509" primary control-plane node in "old-k8s-version-966509" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 13:36:00.139254 1219144 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:36:00.139584 1219144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:36:00.139597 1219144 out.go:358] Setting ErrFile to fd 2...
	I0414 13:36:00.139602 1219144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:36:00.139846 1219144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:36:00.140643 1219144 out.go:352] Setting JSON to false
	I0414 13:36:00.141956 1219144 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19107,"bootTime":1744618653,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:36:00.142138 1219144 start.go:139] virtualization: kvm guest
	I0414 13:36:00.145263 1219144 out.go:177] * [old-k8s-version-966509] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:36:00.147627 1219144 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 13:36:00.147638 1219144 notify.go:220] Checking for updates...
	I0414 13:36:00.150894 1219144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:36:00.152812 1219144 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:36:00.154339 1219144 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:36:00.155957 1219144 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:36:00.157939 1219144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:36:00.160114 1219144 config.go:182] Loaded profile config "cert-expiration-737652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:36:00.160256 1219144 config.go:182] Loaded profile config "kubernetes-upgrade-225418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:36:00.160401 1219144 config.go:182] Loaded profile config "pause-527439": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:36:00.160527 1219144 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:36:00.204907 1219144 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 13:36:00.206916 1219144 start.go:297] selected driver: kvm2
	I0414 13:36:00.206986 1219144 start.go:901] validating driver "kvm2" against <nil>
	I0414 13:36:00.207036 1219144 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:36:00.208705 1219144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:00.208817 1219144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:36:00.228676 1219144 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:36:00.228788 1219144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 13:36:00.229168 1219144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:36:00.229233 1219144 cni.go:84] Creating CNI manager for ""
	I0414 13:36:00.229308 1219144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:36:00.229321 1219144 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 13:36:00.229420 1219144 start.go:340] cluster config:
	{Name:old-k8s-version-966509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:36:00.229595 1219144 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:36:00.231859 1219144 out.go:177] * Starting "old-k8s-version-966509" primary control-plane node in "old-k8s-version-966509" cluster
	I0414 13:36:00.233252 1219144 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 13:36:00.233337 1219144 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 13:36:00.233353 1219144 cache.go:56] Caching tarball of preloaded images
	I0414 13:36:00.233514 1219144 preload.go:172] Found /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 13:36:00.233533 1219144 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 13:36:00.233682 1219144 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/config.json ...
	I0414 13:36:00.233713 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/config.json: {Name:mk6d5cf9c8ef340843328dd134e2afa2160223c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:00.233936 1219144 start.go:360] acquireMachinesLock for old-k8s-version-966509: {Name:mk1c744e856a4885e6214d755048926590b4b12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 13:36:17.209722 1219144 start.go:364] duration metric: took 16.975744775s to acquireMachinesLock for "old-k8s-version-966509"
	I0414 13:36:17.209816 1219144 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-966509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 C
lusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:36:17.209934 1219144 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 13:36:17.212303 1219144 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 13:36:17.212504 1219144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:36:17.212556 1219144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:36:17.231036 1219144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0414 13:36:17.231705 1219144 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:36:17.232304 1219144 main.go:141] libmachine: Using API Version  1
	I0414 13:36:17.232326 1219144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:36:17.232741 1219144 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:36:17.232974 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetMachineName
	I0414 13:36:17.233157 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:36:17.233347 1219144 start.go:159] libmachine.API.Create for "old-k8s-version-966509" (driver="kvm2")
	I0414 13:36:17.233395 1219144 client.go:168] LocalClient.Create starting
	I0414 13:36:17.233439 1219144 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem
	I0414 13:36:17.233488 1219144 main.go:141] libmachine: Decoding PEM data...
	I0414 13:36:17.233510 1219144 main.go:141] libmachine: Parsing certificate...
	I0414 13:36:17.233593 1219144 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem
	I0414 13:36:17.233634 1219144 main.go:141] libmachine: Decoding PEM data...
	I0414 13:36:17.233653 1219144 main.go:141] libmachine: Parsing certificate...
	I0414 13:36:17.233685 1219144 main.go:141] libmachine: Running pre-create checks...
	I0414 13:36:17.233700 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .PreCreateCheck
	I0414 13:36:17.234160 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetConfigRaw
	I0414 13:36:17.234643 1219144 main.go:141] libmachine: Creating machine...
	I0414 13:36:17.234658 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .Create
	I0414 13:36:17.234847 1219144 main.go:141] libmachine: (old-k8s-version-966509) creating KVM machine...
	I0414 13:36:17.234868 1219144 main.go:141] libmachine: (old-k8s-version-966509) creating network...
	I0414 13:36:17.236881 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found existing default KVM network
	I0414 13:36:17.238507 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:17.238242 1219249 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:27:af} reservation:<nil>}
	I0414 13:36:17.239463 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:17.239346 1219249 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:75:d0:0f} reservation:<nil>}
	I0414 13:36:17.241026 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:17.240804 1219249 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000370550}
	I0414 13:36:17.241060 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | created network xml: 
	I0414 13:36:17.241073 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | <network>
	I0414 13:36:17.241083 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG |   <name>mk-old-k8s-version-966509</name>
	I0414 13:36:17.241093 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG |   <dns enable='no'/>
	I0414 13:36:17.241099 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG |   
	I0414 13:36:17.241108 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0414 13:36:17.241123 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG |     <dhcp>
	I0414 13:36:17.241132 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0414 13:36:17.241149 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG |     </dhcp>
	I0414 13:36:17.241196 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG |   </ip>
	I0414 13:36:17.241224 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG |   
	I0414 13:36:17.241239 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | </network>
	I0414 13:36:17.241251 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | 
	I0414 13:36:17.247715 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | trying to create private KVM network mk-old-k8s-version-966509 192.168.61.0/24...
	I0414 13:36:17.346500 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | private KVM network mk-old-k8s-version-966509 192.168.61.0/24 created
	I0414 13:36:17.346537 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:17.346421 1219249 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:36:17.346572 1219144 main.go:141] libmachine: (old-k8s-version-966509) setting up store path in /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509 ...
	I0414 13:36:17.346585 1219144 main.go:141] libmachine: (old-k8s-version-966509) building disk image from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 13:36:17.346605 1219144 main.go:141] libmachine: (old-k8s-version-966509) Downloading /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 13:36:17.661917 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:17.661717 1219249 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa...
	I0414 13:36:17.921220 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:17.921081 1219249 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/old-k8s-version-966509.rawdisk...
	I0414 13:36:17.921253 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | Writing magic tar header
	I0414 13:36:17.921266 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | Writing SSH key tar header
	I0414 13:36:17.921392 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:17.921257 1219249 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509 ...
	I0414 13:36:17.921430 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509
	I0414 13:36:17.921453 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines
	I0414 13:36:17.921466 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:36:17.921475 1219144 main.go:141] libmachine: (old-k8s-version-966509) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509 (perms=drwx------)
	I0414 13:36:17.921498 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927
	I0414 13:36:17.921526 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 13:36:17.921539 1219144 main.go:141] libmachine: (old-k8s-version-966509) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines (perms=drwxr-xr-x)
	I0414 13:36:17.921553 1219144 main.go:141] libmachine: (old-k8s-version-966509) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube (perms=drwxr-xr-x)
	I0414 13:36:17.921569 1219144 main.go:141] libmachine: (old-k8s-version-966509) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927 (perms=drwxrwxr-x)
	I0414 13:36:17.921577 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | checking permissions on dir: /home/jenkins
	I0414 13:36:17.921592 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | checking permissions on dir: /home
	I0414 13:36:17.921603 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | skipping /home - not owner
	I0414 13:36:17.921614 1219144 main.go:141] libmachine: (old-k8s-version-966509) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 13:36:17.921623 1219144 main.go:141] libmachine: (old-k8s-version-966509) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 13:36:17.921629 1219144 main.go:141] libmachine: (old-k8s-version-966509) creating domain...
	I0414 13:36:17.922969 1219144 main.go:141] libmachine: (old-k8s-version-966509) define libvirt domain using xml: 
	I0414 13:36:17.922999 1219144 main.go:141] libmachine: (old-k8s-version-966509) <domain type='kvm'>
	I0414 13:36:17.923009 1219144 main.go:141] libmachine: (old-k8s-version-966509)   <name>old-k8s-version-966509</name>
	I0414 13:36:17.923018 1219144 main.go:141] libmachine: (old-k8s-version-966509)   <memory unit='MiB'>2200</memory>
	I0414 13:36:17.923027 1219144 main.go:141] libmachine: (old-k8s-version-966509)   <vcpu>2</vcpu>
	I0414 13:36:17.923034 1219144 main.go:141] libmachine: (old-k8s-version-966509)   <features>
	I0414 13:36:17.923049 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <acpi/>
	I0414 13:36:17.923060 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <apic/>
	I0414 13:36:17.923072 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <pae/>
	I0414 13:36:17.923079 1219144 main.go:141] libmachine: (old-k8s-version-966509)     
	I0414 13:36:17.923087 1219144 main.go:141] libmachine: (old-k8s-version-966509)   </features>
	I0414 13:36:17.923095 1219144 main.go:141] libmachine: (old-k8s-version-966509)   <cpu mode='host-passthrough'>
	I0414 13:36:17.923111 1219144 main.go:141] libmachine: (old-k8s-version-966509)   
	I0414 13:36:17.923118 1219144 main.go:141] libmachine: (old-k8s-version-966509)   </cpu>
	I0414 13:36:17.923126 1219144 main.go:141] libmachine: (old-k8s-version-966509)   <os>
	I0414 13:36:17.923139 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <type>hvm</type>
	I0414 13:36:17.923150 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <boot dev='cdrom'/>
	I0414 13:36:17.923158 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <boot dev='hd'/>
	I0414 13:36:17.923169 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <bootmenu enable='no'/>
	I0414 13:36:17.923173 1219144 main.go:141] libmachine: (old-k8s-version-966509)   </os>
	I0414 13:36:17.923178 1219144 main.go:141] libmachine: (old-k8s-version-966509)   <devices>
	I0414 13:36:17.923185 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <disk type='file' device='cdrom'>
	I0414 13:36:17.923206 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/boot2docker.iso'/>
	I0414 13:36:17.923222 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <target dev='hdc' bus='scsi'/>
	I0414 13:36:17.923234 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <readonly/>
	I0414 13:36:17.923240 1219144 main.go:141] libmachine: (old-k8s-version-966509)     </disk>
	I0414 13:36:17.923272 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <disk type='file' device='disk'>
	I0414 13:36:17.923286 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 13:36:17.923332 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/old-k8s-version-966509.rawdisk'/>
	I0414 13:36:17.923356 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <target dev='hda' bus='virtio'/>
	I0414 13:36:17.923362 1219144 main.go:141] libmachine: (old-k8s-version-966509)     </disk>
	I0414 13:36:17.923370 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <interface type='network'>
	I0414 13:36:17.923396 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <source network='mk-old-k8s-version-966509'/>
	I0414 13:36:17.923417 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <model type='virtio'/>
	I0414 13:36:17.923427 1219144 main.go:141] libmachine: (old-k8s-version-966509)     </interface>
	I0414 13:36:17.923438 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <interface type='network'>
	I0414 13:36:17.923448 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <source network='default'/>
	I0414 13:36:17.923458 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <model type='virtio'/>
	I0414 13:36:17.923524 1219144 main.go:141] libmachine: (old-k8s-version-966509)     </interface>
	I0414 13:36:17.923560 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <serial type='pty'>
	I0414 13:36:17.923575 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <target port='0'/>
	I0414 13:36:17.923588 1219144 main.go:141] libmachine: (old-k8s-version-966509)     </serial>
	I0414 13:36:17.923598 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <console type='pty'>
	I0414 13:36:17.923610 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <target type='serial' port='0'/>
	I0414 13:36:17.923620 1219144 main.go:141] libmachine: (old-k8s-version-966509)     </console>
	I0414 13:36:17.923648 1219144 main.go:141] libmachine: (old-k8s-version-966509)     <rng model='virtio'>
	I0414 13:36:17.923689 1219144 main.go:141] libmachine: (old-k8s-version-966509)       <backend model='random'>/dev/random</backend>
	I0414 13:36:17.923705 1219144 main.go:141] libmachine: (old-k8s-version-966509)     </rng>
	I0414 13:36:17.923715 1219144 main.go:141] libmachine: (old-k8s-version-966509)     
	I0414 13:36:17.923727 1219144 main.go:141] libmachine: (old-k8s-version-966509)     
	I0414 13:36:17.923736 1219144 main.go:141] libmachine: (old-k8s-version-966509)   </devices>
	I0414 13:36:17.923757 1219144 main.go:141] libmachine: (old-k8s-version-966509) </domain>
	I0414 13:36:17.923789 1219144 main.go:141] libmachine: (old-k8s-version-966509) 
	I0414 13:36:17.928686 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:f0:58:df in network default
	I0414 13:36:17.929719 1219144 main.go:141] libmachine: (old-k8s-version-966509) starting domain...
	I0414 13:36:17.929757 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:17.929766 1219144 main.go:141] libmachine: (old-k8s-version-966509) ensuring networks are active...
	I0414 13:36:17.930843 1219144 main.go:141] libmachine: (old-k8s-version-966509) Ensuring network default is active
	I0414 13:36:17.931359 1219144 main.go:141] libmachine: (old-k8s-version-966509) Ensuring network mk-old-k8s-version-966509 is active
	I0414 13:36:17.932354 1219144 main.go:141] libmachine: (old-k8s-version-966509) getting domain XML...
	I0414 13:36:17.933334 1219144 main.go:141] libmachine: (old-k8s-version-966509) creating domain...
	I0414 13:36:19.473439 1219144 main.go:141] libmachine: (old-k8s-version-966509) waiting for IP...
	I0414 13:36:19.474635 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:19.475144 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:19.475207 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:19.475144 1219249 retry.go:31] will retry after 195.318369ms: waiting for domain to come up
	I0414 13:36:19.673092 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:19.674176 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:19.674203 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:19.674052 1219249 retry.go:31] will retry after 382.756079ms: waiting for domain to come up
	I0414 13:36:20.059029 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:20.059994 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:20.060021 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:20.059871 1219249 retry.go:31] will retry after 466.739415ms: waiting for domain to come up
	I0414 13:36:20.528720 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:20.529766 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:20.529792 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:20.529651 1219249 retry.go:31] will retry after 494.450507ms: waiting for domain to come up
	I0414 13:36:21.026799 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:21.027513 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:21.027540 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:21.027503 1219249 retry.go:31] will retry after 591.922694ms: waiting for domain to come up
	I0414 13:36:21.621805 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:21.622437 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:21.622470 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:21.622389 1219249 retry.go:31] will retry after 779.120314ms: waiting for domain to come up
	I0414 13:36:22.403017 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:22.403781 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:22.403837 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:22.403743 1219249 retry.go:31] will retry after 876.200798ms: waiting for domain to come up
	I0414 13:36:23.281271 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:23.281888 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:23.281918 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:23.281844 1219249 retry.go:31] will retry after 997.606309ms: waiting for domain to come up
	I0414 13:36:24.281065 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:24.281622 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:24.281670 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:24.281580 1219249 retry.go:31] will retry after 1.290691803s: waiting for domain to come up
	I0414 13:36:25.573658 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:25.574260 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:25.574287 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:25.574218 1219249 retry.go:31] will retry after 2.126351442s: waiting for domain to come up
	I0414 13:36:27.702447 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:27.703163 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:27.703196 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:27.703129 1219249 retry.go:31] will retry after 2.120286187s: waiting for domain to come up
	I0414 13:36:29.826027 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:29.826717 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:29.826752 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:29.826641 1219249 retry.go:31] will retry after 3.284505283s: waiting for domain to come up
	I0414 13:36:33.113297 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:33.114131 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:33.114164 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:33.114013 1219249 retry.go:31] will retry after 3.692465087s: waiting for domain to come up
	I0414 13:36:36.809018 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:36.809726 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:36:36.809758 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:36:36.809657 1219249 retry.go:31] will retry after 5.343638536s: waiting for domain to come up
	I0414 13:36:42.156808 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.157412 1219144 main.go:141] libmachine: (old-k8s-version-966509) found domain IP: 192.168.61.227
	I0414 13:36:42.157441 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has current primary IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
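The retry lines above show libmachine polling the domain's DHCP lease with a delay that grows on each attempt (from roughly 195ms up to several seconds) until an IP address appears. A rough Go sketch of that wait-with-growing-backoff loop, with the lease lookup left as a stub since the real driver reads it through libvirt:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupLeaseIP stands in for reading the domain's DHCP lease from
    // libvirt; it is a stub in this sketch.
    func lookupLeaseIP(domain string) (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP retries the lease lookup, lengthening the delay each time,
    // mirroring the "will retry after ..." lines in the log above.
    func waitForIP(domain string, maxWait time.Duration) (string, error) {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(maxWait)
        for time.Now().Before(deadline) {
            if ip, err := lookupLeaseIP(domain); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow the backoff between attempts
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", domain)
    }

    func main() {
        if ip, err := waitForIP("old-k8s-version-966509", 5*time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found domain IP:", ip)
        }
    }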
	I0414 13:36:42.157451 1219144 main.go:141] libmachine: (old-k8s-version-966509) reserving static IP address...
	I0414 13:36:42.157937 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-966509", mac: "52:54:00:3e:6b:50", ip: "192.168.61.227"} in network mk-old-k8s-version-966509
	I0414 13:36:42.269166 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | Getting to WaitForSSH function...
	I0414 13:36:42.269217 1219144 main.go:141] libmachine: (old-k8s-version-966509) reserved static IP address 192.168.61.227 for domain old-k8s-version-966509
	I0414 13:36:42.269232 1219144 main.go:141] libmachine: (old-k8s-version-966509) waiting for SSH...
	I0414 13:36:42.272222 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.272914 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:42.272953 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.273150 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | Using SSH client type: external
	I0414 13:36:42.273186 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | Using SSH private key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa (-rw-------)
	I0414 13:36:42.273222 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:36:42.273236 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | About to run SSH command:
	I0414 13:36:42.273251 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | exit 0
	I0414 13:36:42.408034 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | SSH cmd err, output: <nil>: 
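The WaitForSSH step above probes the guest by running `exit 0` through an external ssh client with the options shown, and treats a zero exit status as "SSH is up". A small Go sketch of a single such probe (host and key path taken from the log; in practice the probe is retried until it succeeds):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshReady runs "exit 0" on the guest through the system ssh binary and
    // reports whether the command exited cleanly.
    func sshReady(host, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+host,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        // Values copied from the log above; retry/backoff is omitted here.
        key := "/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa"
        if sshReady("192.168.61.227", key) {
            fmt.Println("SSH is available")
        }
    }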
	I0414 13:36:42.408403 1219144 main.go:141] libmachine: (old-k8s-version-966509) KVM machine creation complete
	I0414 13:36:42.408728 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetConfigRaw
	I0414 13:36:42.409445 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:36:42.409726 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:36:42.409953 1219144 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 13:36:42.409971 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetState
	I0414 13:36:42.411446 1219144 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 13:36:42.411461 1219144 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 13:36:42.411467 1219144 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 13:36:42.411473 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:42.414601 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.415033 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:42.415066 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.415286 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:36:42.415533 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:42.415740 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:42.415907 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:36:42.416158 1219144 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:42.416457 1219144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:36:42.416479 1219144 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 13:36:42.531742 1219144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:36:42.531788 1219144 main.go:141] libmachine: Detecting the provisioner...
	I0414 13:36:42.531801 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:42.535965 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.536599 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:42.536646 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.536865 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:36:42.537103 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:42.537305 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:42.537512 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:36:42.537853 1219144 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:42.538159 1219144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:36:42.538174 1219144 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 13:36:42.657120 1219144 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 13:36:42.657241 1219144 main.go:141] libmachine: found compatible host: buildroot
	I0414 13:36:42.657250 1219144 main.go:141] libmachine: Provisioning with buildroot...
	I0414 13:36:42.657285 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetMachineName
	I0414 13:36:42.657657 1219144 buildroot.go:166] provisioning hostname "old-k8s-version-966509"
	I0414 13:36:42.657682 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetMachineName
	I0414 13:36:42.657930 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:42.661572 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.662002 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:42.662029 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.662324 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:36:42.662593 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:42.662791 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:42.662996 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:36:42.663154 1219144 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:42.663469 1219144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:36:42.663492 1219144 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-966509 && echo "old-k8s-version-966509" | sudo tee /etc/hostname
	I0414 13:36:42.803676 1219144 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-966509
	
	I0414 13:36:42.803705 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:42.807618 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.808250 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:42.808293 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.808593 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:36:42.808883 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:42.809111 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:42.809320 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:36:42.809564 1219144 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:42.809816 1219144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:36:42.809839 1219144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-966509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-966509/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-966509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:36:42.944533 1219144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:36:42.944576 1219144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 13:36:42.944614 1219144 buildroot.go:174] setting up certificates
	I0414 13:36:42.944630 1219144 provision.go:84] configureAuth start
	I0414 13:36:42.944646 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetMachineName
	I0414 13:36:42.945052 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetIP
	I0414 13:36:42.948705 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.949105 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:42.949152 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.949296 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:42.952684 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.953130 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:42.953153 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:42.953375 1219144 provision.go:143] copyHostCerts
	I0414 13:36:42.953443 1219144 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem, removing ...
	I0414 13:36:42.953458 1219144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem
	I0414 13:36:42.953520 1219144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 13:36:42.953636 1219144 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem, removing ...
	I0414 13:36:42.953650 1219144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem
	I0414 13:36:42.953675 1219144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 13:36:42.953748 1219144 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem, removing ...
	I0414 13:36:42.953759 1219144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem
	I0414 13:36:42.953785 1219144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 13:36:42.953850 1219144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-966509 san=[127.0.0.1 192.168.61.227 localhost minikube old-k8s-version-966509]
	I0414 13:36:43.091536 1219144 provision.go:177] copyRemoteCerts
	I0414 13:36:43.091605 1219144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:36:43.091637 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:43.095319 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.095711 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:43.095738 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.095980 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:36:43.096210 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:43.096447 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:36:43.096592 1219144 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa Username:docker}
	I0414 13:36:43.216124 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:36:43.253785 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 13:36:43.291623 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 13:36:43.325815 1219144 provision.go:87] duration metric: took 381.165584ms to configureAuth
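configureAuth above copies the host certificates into the machine store and generates a server certificate whose SANs include 127.0.0.1, 192.168.61.227, localhost, minikube and old-k8s-version-966509. A condensed Go sketch of issuing such a SAN certificate with the standard library (it generates a throwaway CA inline so the example is self-contained, whereas minikube signs with the existing ca.pem/ca-key.pem; error handling is elided):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA, only to keep the sketch self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs listed in the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-966509"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-966509"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.227")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Printf("issued server certificate (%d bytes of DER)\n", len(srvDER))
    }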
	I0414 13:36:43.325854 1219144 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:36:43.326111 1219144 config.go:182] Loaded profile config "old-k8s-version-966509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 13:36:43.326218 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:43.329856 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.330339 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:43.330377 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.330812 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:36:43.331145 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:43.331389 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:43.331643 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:36:43.331928 1219144 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:43.332248 1219144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:36:43.332276 1219144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:36:43.615347 1219144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:36:43.615377 1219144 main.go:141] libmachine: Checking connection to Docker...
	I0414 13:36:43.615389 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetURL
	I0414 13:36:43.617034 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | using libvirt version 6000000
	I0414 13:36:43.620409 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.620925 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:43.620958 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.621170 1219144 main.go:141] libmachine: Docker is up and running!
	I0414 13:36:43.621191 1219144 main.go:141] libmachine: Reticulating splines...
	I0414 13:36:43.621201 1219144 client.go:171] duration metric: took 26.387793744s to LocalClient.Create
	I0414 13:36:43.621233 1219144 start.go:167] duration metric: took 26.387890065s to libmachine.API.Create "old-k8s-version-966509"
	I0414 13:36:43.621246 1219144 start.go:293] postStartSetup for "old-k8s-version-966509" (driver="kvm2")
	I0414 13:36:43.621259 1219144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:36:43.621286 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:36:43.621582 1219144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:36:43.621622 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:43.624548 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.625094 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:43.625125 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.625417 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:36:43.625658 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:43.625852 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:36:43.626031 1219144 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa Username:docker}
	I0414 13:36:43.724096 1219144 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:36:43.736063 1219144 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:36:43.736104 1219144 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 13:36:43.736182 1219144 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 13:36:43.736322 1219144 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem -> 11757462.pem in /etc/ssl/certs
	I0414 13:36:43.736468 1219144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:36:43.751738 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:36:43.789086 1219144 start.go:296] duration metric: took 167.822507ms for postStartSetup
	I0414 13:36:43.789151 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetConfigRaw
	I0414 13:36:43.792739 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetIP
	I0414 13:36:43.799464 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.800264 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:43.800334 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.800878 1219144 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/config.json ...
	I0414 13:36:43.801251 1219144 start.go:128] duration metric: took 26.591292999s to createHost
	I0414 13:36:43.801298 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:43.805758 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.808231 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:43.808270 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.808451 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:36:43.808994 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:43.809322 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:43.809565 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:36:43.809821 1219144 main.go:141] libmachine: Using SSH client type: native
	I0414 13:36:43.810143 1219144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:36:43.810158 1219144 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:36:43.934378 1219144 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744637803.918160069
	
	I0414 13:36:43.934411 1219144 fix.go:216] guest clock: 1744637803.918160069
	I0414 13:36:43.934425 1219144 fix.go:229] Guest: 2025-04-14 13:36:43.918160069 +0000 UTC Remote: 2025-04-14 13:36:43.801274174 +0000 UTC m=+43.706392311 (delta=116.885895ms)
	I0414 13:36:43.934488 1219144 fix.go:200] guest clock delta is within tolerance: 116.885895ms
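The delta reported above is simply the guest timestamp minus the host-side remote timestamp: 13:36:43.918160069 minus 13:36:43.801274174 gives 0.116885895 s, about 117 ms, which is inside the tolerance, so minikube leaves the guest clock alone.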
	I0414 13:36:43.934496 1219144 start.go:83] releasing machines lock for "old-k8s-version-966509", held for 26.724727502s
	I0414 13:36:43.934541 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:36:43.934895 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetIP
	I0414 13:36:43.938671 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.939160 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:43.939196 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.939494 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:36:43.940254 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:36:43.940533 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:36:43.940671 1219144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:36:43.940767 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:43.940868 1219144 ssh_runner.go:195] Run: cat /version.json
	I0414 13:36:43.940901 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:36:43.948033 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.948067 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.948745 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:43.948787 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.948937 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:43.948970 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:43.949035 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:36:43.949333 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:43.949413 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:36:43.949536 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:36:43.949828 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:36:43.949730 1219144 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa Username:docker}
	I0414 13:36:43.950029 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:36:43.950266 1219144 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa Username:docker}
	I0414 13:36:44.042574 1219144 ssh_runner.go:195] Run: systemctl --version
	I0414 13:36:44.065592 1219144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:36:44.245331 1219144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:36:44.258195 1219144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:36:44.258305 1219144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:36:44.276954 1219144 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:36:44.276987 1219144 start.go:495] detecting cgroup driver to use...
	I0414 13:36:44.277079 1219144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:36:44.308092 1219144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:36:44.329156 1219144 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:36:44.329233 1219144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:36:44.351793 1219144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:36:44.373409 1219144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:36:44.532735 1219144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:36:44.751084 1219144 docker.go:233] disabling docker service ...
	I0414 13:36:44.751169 1219144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:36:44.772639 1219144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:36:44.792519 1219144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:36:45.030190 1219144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:36:45.226994 1219144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:36:45.254582 1219144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:36:45.280680 1219144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 13:36:45.280752 1219144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:36:45.297958 1219144 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:36:45.298054 1219144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:36:45.330986 1219144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:36:45.353479 1219144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
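Taken together, the three sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf containing (an illustrative reconstruction from the commands, not a dump of the actual file; in the real TOML the pause image and the two cgroup keys sit in their respective sections):

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"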
	I0414 13:36:45.373383 1219144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:36:45.392458 1219144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:36:45.411607 1219144 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:36:45.411728 1219144 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:36:45.435105 1219144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:36:45.452718 1219144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:36:45.601597 1219144 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:36:45.704903 1219144 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:36:45.704994 1219144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:36:45.710555 1219144 start.go:563] Will wait 60s for crictl version
	I0414 13:36:45.710656 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:45.715950 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:36:45.764319 1219144 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:36:45.764424 1219144 ssh_runner.go:195] Run: crio --version
	I0414 13:36:45.799041 1219144 ssh_runner.go:195] Run: crio --version
	I0414 13:36:45.832921 1219144 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 13:36:45.834632 1219144 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetIP
	I0414 13:36:45.838350 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:45.838893 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:36:45.838933 1219144 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:36:45.839180 1219144 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 13:36:45.843466 1219144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:36:45.857568 1219144 kubeadm.go:883] updating cluster {Name:old-k8s-version-966509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-
version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:36:45.857712 1219144 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 13:36:45.857774 1219144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:36:45.901114 1219144 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 13:36:45.901204 1219144 ssh_runner.go:195] Run: which lz4
	I0414 13:36:45.906068 1219144 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:36:45.911106 1219144 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:36:45.911137 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 13:36:47.705817 1219144 crio.go:462] duration metric: took 1.799793737s to copy over tarball
	I0414 13:36:47.705903 1219144 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:36:50.724160 1219144 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.018229212s)
	I0414 13:36:50.724190 1219144 crio.go:469] duration metric: took 3.018337559s to extract the tarball
	I0414 13:36:50.724210 1219144 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:36:50.771017 1219144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:36:50.818714 1219144 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 13:36:50.818746 1219144 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 13:36:50.818836 1219144 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:36:50.818861 1219144 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:50.818894 1219144 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:50.818922 1219144 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 13:36:50.818931 1219144 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:50.818958 1219144 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:50.818995 1219144 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:50.819036 1219144 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 13:36:50.820619 1219144 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:50.820647 1219144 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:36:50.820689 1219144 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:50.820658 1219144 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:50.821205 1219144 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 13:36:50.821221 1219144 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 13:36:50.821232 1219144 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:50.821235 1219144 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:50.970900 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:50.985819 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:50.985967 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:50.992645 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:50.993132 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:50.997076 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 13:36:51.030948 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 13:36:51.085970 1219144 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 13:36:51.086059 1219144 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:51.086120 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.146664 1219144 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 13:36:51.146725 1219144 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:51.146793 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.147926 1219144 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 13:36:51.147981 1219144 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:51.148049 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.178493 1219144 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 13:36:51.178557 1219144 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:51.178496 1219144 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 13:36:51.178610 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.178625 1219144 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:51.178693 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.182325 1219144 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 13:36:51.182382 1219144 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 13:36:51.182433 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.186121 1219144 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 13:36:51.186179 1219144 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 13:36:51.186217 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:51.186227 1219144 ssh_runner.go:195] Run: which crictl
	I0414 13:36:51.186408 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:51.186415 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:51.190207 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:51.190305 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:51.198572 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:36:51.313867 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:51.313878 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:36:51.330571 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:51.336437 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:51.348184 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:51.348223 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:51.348282 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:36:51.508370 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:36:51.508420 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:36:51.508388 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:36:51.508482 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:36:51.524594 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:36:51.524673 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:36:51.524690 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:36:51.668769 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 13:36:51.668840 1219144 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:36:51.686575 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 13:36:51.686645 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 13:36:51.686688 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 13:36:51.695209 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 13:36:51.695258 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 13:36:51.716304 1219144 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 13:36:52.738660 1219144 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:36:52.883689 1219144 cache_images.go:92] duration metric: took 2.064921628s to LoadCachedImages
	W0414 13:36:52.883800 1219144 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0414 13:36:52.883818 1219144 kubeadm.go:934] updating node { 192.168.61.227 8443 v1.20.0 crio true true} ...
	I0414 13:36:52.883931 1219144 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-966509 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 13:36:52.884022 1219144 ssh_runner.go:195] Run: crio config
	I0414 13:36:52.936981 1219144 cni.go:84] Creating CNI manager for ""
	I0414 13:36:52.937016 1219144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:36:52.937032 1219144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:36:52.937063 1219144 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-966509 NodeName:old-k8s-version-966509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 13:36:52.937242 1219144 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-966509"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
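Note: the kubeadm/kubelet configuration dumped above is rendered by minikube from the node's parameters (IP 192.168.61.227, name old-k8s-version-966509, CRI socket /var/run/crio/crio.sock, Kubernetes v1.20.0). The Go sketch below only illustrates how such a fragment could be templated with text/template; the struct, template and field names here are invented for the example and are not minikube's actual bootstrapper code.

// Illustrative only: render a kubeadm InitConfiguration fragment from the
// node parameters that appear in the log above.
package main

import (
	"os"
	"text/template"
)

type nodeParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	p := nodeParams{
		AdvertiseAddress: "192.168.61.227",
		BindPort:         8443,
		NodeName:         "old-k8s-version-966509",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	// Print the fragment; minikube itself writes the full document to
	// /var/tmp/minikube/kubeadm.yaml.new before copying it into place.
	tmpl := template.Must(template.New("init").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}

Running this prints the nodeRegistration/localAPIEndpoint fragment with the same values that appear in the generated kubeadm.yaml above.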
	I0414 13:36:52.937349 1219144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 13:36:52.948058 1219144 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:36:52.948152 1219144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:36:52.958374 1219144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0414 13:36:52.977198 1219144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:36:52.996256 1219144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0414 13:36:53.019522 1219144 ssh_runner.go:195] Run: grep 192.168.61.227	control-plane.minikube.internal$ /etc/hosts
	I0414 13:36:53.024544 1219144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
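The /etc/hosts one-liner above drops any existing control-plane.minikube.internal entry and appends the current mapping. A minimal Go equivalent of that idempotent update is sketched below; the function name and the decision to write the file in place are illustrative simplifications (the real command needs root and copies via a temp file).

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line that already maps the given host and
// appends a fresh "ip<TAB>host" entry, mirroring the grep -v / echo / cp
// pipeline in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry for this host
		}
		kept = append(kept, line)
	}
	// Trim a trailing empty element so blank lines do not accumulate.
	if n := len(kept); n > 0 && kept[n-1] == "" {
		kept = kept[:n-1]
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host), "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.227", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}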
	I0414 13:36:53.039316 1219144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:36:53.161921 1219144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:36:53.180557 1219144 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509 for IP: 192.168.61.227
	I0414 13:36:53.180584 1219144 certs.go:194] generating shared ca certs ...
	I0414 13:36:53.180615 1219144 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.180797 1219144 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:36:53.180835 1219144 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:36:53.180846 1219144 certs.go:256] generating profile certs ...
	I0414 13:36:53.180903 1219144 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.key
	I0414 13:36:53.180916 1219144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.crt with IP's: []
	I0414 13:36:53.338412 1219144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.crt ...
	I0414 13:36:53.338447 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.crt: {Name:mk7b24a388c6ee9adbc0642aae7bc1daf3ab8786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.338672 1219144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.key ...
	I0414 13:36:53.338696 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.key: {Name:mk8647deffad8dec1bd3919a89d9a17086df5abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.338806 1219144 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key.c3cdf645
	I0414 13:36:53.338831 1219144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt.c3cdf645 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.227]
	I0414 13:36:53.586142 1219144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt.c3cdf645 ...
	I0414 13:36:53.586184 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt.c3cdf645: {Name:mk043f45a7c3125abf9a19446894d9548a4ae0a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.595916 1219144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key.c3cdf645 ...
	I0414 13:36:53.595965 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key.c3cdf645: {Name:mke70fbb4c1194bab5d0b89416f347c6874d9bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.596117 1219144 certs.go:381] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt.c3cdf645 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt
	I0414 13:36:53.596234 1219144 certs.go:385] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key.c3cdf645 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key
	I0414 13:36:53.596319 1219144 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.key
	I0414 13:36:53.596349 1219144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.crt with IP's: []
	I0414 13:36:53.752655 1219144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.crt ...
	I0414 13:36:53.752691 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.crt: {Name:mk598716f9ad4ef551c2a36e028320375e528cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:36:53.766007 1219144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.key ...
	I0414 13:36:53.766052 1219144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.key: {Name:mka6774a43395a2ea38d7bfe258f08f4a4f5a394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
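The certs.go/crypto.go steps above generate the profile certificates (client, apiserver with the listed SAN IPs, proxy-client), each signed by the existing minikubeCA. The following is a rough, self-contained standard-library illustration of that kind of CA-signed certificate generation, not minikube's actual helper code; error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair; in minikube this is the pre-existing minikubeCA material.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate signed by the CA, with the SAN IPs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.227"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	// Emit the leaf cert as PEM; the real code writes .crt/.key pairs under
	// the profile directory shown in the log.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}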
	I0414 13:36:53.766358 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:36:53.766413 1219144 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:36:53.766426 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:36:53.766458 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:36:53.766487 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:36:53.766520 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:36:53.766585 1219144 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:36:53.767520 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:36:53.796003 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:36:53.824604 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:36:53.853438 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:36:53.882034 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 13:36:53.910641 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:36:53.940189 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:36:53.966542 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 13:36:53.994424 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:36:54.030193 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:36:54.063017 1219144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
	I0414 13:36:54.096810 1219144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:36:54.125560 1219144 ssh_runner.go:195] Run: openssl version
	I0414 13:36:54.135129 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:36:54.149872 1219144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:36:54.154881 1219144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:36:54.154976 1219144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:36:54.161441 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 13:36:54.173596 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:36:54.186345 1219144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:36:54.191959 1219144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:36:54.192053 1219144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:36:54.198844 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
	I0414 13:36:54.213143 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:36:54.226906 1219144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:36:54.231891 1219144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:36:54.231974 1219144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:36:54.238607 1219144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
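The three blocks above install minikubeCA.pem, 1175746.pem and 11757462.pem into the trust store and create OpenSSL subject-hash symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) in /etc/ssl/certs. Below is a small Go sketch of that same sequence, shelling out to openssl exactly as the logged commands do; the helper name is illustrative, it does not force-replace existing links the way ln -fs does, and it needs root to write under /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links a CA PEM into /etc/ssl/certs and then creates the
// OpenSSL subject-hash symlink (<hash>.0) so TLS clients can find it,
// mirroring the test -L / ln -fs / openssl x509 -hash commands above.
func installCACert(pemPath string) error {
	name := filepath.Base(pemPath)
	target := filepath.Join("/etc/ssl/certs", name)
	if err := os.Symlink(pemPath, target); err != nil && !os.IsExist(err) {
		return err
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	hashLink := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(target, hashLink); err != nil && !os.IsExist(err) {
		return err
	}
	fmt.Println("installed", hashLink)
	return nil
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}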
	I0414 13:36:54.251081 1219144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:36:54.255749 1219144 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 13:36:54.256213 1219144 kubeadm.go:392] StartCluster: {Name:old-k8s-version-966509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:36:54.258650 1219144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:36:54.258730 1219144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:36:54.297324 1219144 cri.go:89] found id: ""
	I0414 13:36:54.297445 1219144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:36:54.311781 1219144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:36:54.323135 1219144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:36:54.333661 1219144 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:36:54.333691 1219144 kubeadm.go:157] found existing configuration files:
	
	I0414 13:36:54.333740 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:36:54.343740 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:36:54.343822 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:36:54.355368 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:36:54.371034 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:36:54.371114 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:36:54.383265 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:36:54.397316 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:36:54.397382 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:36:54.408217 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:36:54.419226 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:36:54.419326 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
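The grep / rm -f sequence above is the stale-kubeconfig cleanup: each conf file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is deleted so kubeadm can rewrite it. A compact Go sketch of that check follows; names and the in-process file handling are illustrative, since the real code runs the shell commands over SSH.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleKubeconfigs keeps a kubeconfig only if it already points at the
// expected control-plane endpoint; anything else is removed, mirroring the
// "sudo grep ... " followed by "sudo rm -f ..." sequence in the log above.
func cleanupStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the right endpoint, keep it
		}
		os.Remove(p) // ignore errors, like `rm -f`
		fmt.Println("removed stale config:", p)
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}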
	I0414 13:36:54.430574 1219144 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:36:54.575605 1219144 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:36:54.575700 1219144 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:36:54.730808 1219144 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:36:54.730982 1219144 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:36:54.731162 1219144 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:36:54.973894 1219144 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:36:54.976187 1219144 out.go:235]   - Generating certificates and keys ...
	I0414 13:36:54.976325 1219144 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:36:54.976431 1219144 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:36:55.271931 1219144 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 13:36:55.563713 1219144 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 13:36:55.700416 1219144 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 13:36:55.889666 1219144 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 13:36:56.039956 1219144 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 13:36:56.040256 1219144 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-966509] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0414 13:36:56.272184 1219144 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 13:36:56.272445 1219144 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-966509] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0414 13:36:56.474255 1219144 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 13:36:56.754040 1219144 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 13:36:56.920664 1219144 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 13:36:56.921017 1219144 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:36:57.007139 1219144 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:36:57.278871 1219144 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:36:57.528177 1219144 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:36:57.950090 1219144 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:36:57.977755 1219144 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:36:57.979471 1219144 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:36:57.979568 1219144 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:36:58.150839 1219144 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:36:58.153461 1219144 out.go:235]   - Booting up control plane ...
	I0414 13:36:58.153624 1219144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:36:58.167983 1219144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:36:58.169501 1219144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:36:58.170951 1219144 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:36:58.177917 1219144 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:37:38.177579 1219144 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 13:37:38.177713 1219144 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:37:38.178044 1219144 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:37:43.178771 1219144 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:37:43.179069 1219144 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:37:53.179743 1219144 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:37:53.180032 1219144 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:38:13.180854 1219144 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:38:13.181156 1219144 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:38:53.180704 1219144 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:38:53.181001 1219144 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:38:53.181094 1219144 kubeadm.go:310] 
	I0414 13:38:53.181170 1219144 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 13:38:53.181229 1219144 kubeadm.go:310] 		timed out waiting for the condition
	I0414 13:38:53.181254 1219144 kubeadm.go:310] 
	I0414 13:38:53.181315 1219144 kubeadm.go:310] 	This error is likely caused by:
	I0414 13:38:53.181366 1219144 kubeadm.go:310] 		- The kubelet is not running
	I0414 13:38:53.181511 1219144 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 13:38:53.181522 1219144 kubeadm.go:310] 
	I0414 13:38:53.181670 1219144 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 13:38:53.181737 1219144 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 13:38:53.181783 1219144 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 13:38:53.181795 1219144 kubeadm.go:310] 
	I0414 13:38:53.181929 1219144 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 13:38:53.182074 1219144 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 13:38:53.182099 1219144 kubeadm.go:310] 
	I0414 13:38:53.182230 1219144 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 13:38:53.182351 1219144 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 13:38:53.182475 1219144 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 13:38:53.182582 1219144 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 13:38:53.182591 1219144 kubeadm.go:310] 
	I0414 13:38:53.182744 1219144 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 13:38:53.182879 1219144 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 13:38:53.182971 1219144 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 13:38:53.183162 1219144 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-966509] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-966509] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-966509] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-966509] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
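This first kubeadm attempt fails because every [kubelet-check] probe of http://localhost:10248/healthz is refused for the full 4-minute wait, so the control plane never comes up. The probe is just repeated polling of the kubelet healthz endpoint until it answers 200; a minimal Go version of such a poll is sketched below, with the retry interval and timeout chosen only to match the cadence visible in the log, not taken from kubeadm's source.

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitForKubeletHealthz polls the kubelet healthz endpoint the same way the
// [kubelet-check] lines above describe (curl http://localhost:10248/healthz),
// returning nil once it answers 200 or an error after the deadline.
func waitForKubeletHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second) // retry interval, roughly matching the log
	}
	return fmt.Errorf("kubelet at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForKubeletHealthz("http://localhost:10248/healthz", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}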
	
	I0414 13:38:53.183214 1219144 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 13:38:55.474794 1219144 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.291548781s)
	I0414 13:38:55.474876 1219144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:38:55.490427 1219144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:38:55.501122 1219144 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:38:55.501145 1219144 kubeadm.go:157] found existing configuration files:
	
	I0414 13:38:55.501194 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:38:55.511426 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:38:55.511491 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:38:55.522381 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:38:55.532424 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:38:55.532495 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:38:55.542331 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:38:55.552220 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:38:55.552299 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:38:55.562818 1219144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:38:55.573159 1219144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:38:55.573225 1219144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:38:55.584213 1219144 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:38:55.657642 1219144 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:38:55.657731 1219144 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:38:55.801190 1219144 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:38:55.801353 1219144 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:38:55.801530 1219144 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:38:55.999203 1219144 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:38:56.002653 1219144 out.go:235]   - Generating certificates and keys ...
	I0414 13:38:56.002785 1219144 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:38:56.002865 1219144 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:38:56.003020 1219144 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 13:38:56.003141 1219144 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 13:38:56.003253 1219144 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 13:38:56.003348 1219144 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 13:38:56.003431 1219144 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 13:38:56.003524 1219144 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 13:38:56.003642 1219144 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 13:38:56.003773 1219144 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 13:38:56.003850 1219144 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 13:38:56.003916 1219144 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:38:56.390544 1219144 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:38:56.617343 1219144 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:38:56.828025 1219144 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:38:56.943333 1219144 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:38:56.960215 1219144 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:38:56.961425 1219144 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:38:56.961505 1219144 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:38:57.121377 1219144 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:38:57.124264 1219144 out.go:235]   - Booting up control plane ...
	I0414 13:38:57.124418 1219144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:38:57.131617 1219144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:38:57.133015 1219144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:38:57.133829 1219144 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:38:57.143425 1219144 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:39:37.144786 1219144 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 13:39:37.145156 1219144 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:39:37.145381 1219144 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:39:42.146257 1219144 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:39:42.146505 1219144 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:39:52.147045 1219144 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:39:52.147224 1219144 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:40:12.148359 1219144 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:40:12.148600 1219144 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:40:52.148136 1219144 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:40:52.148410 1219144 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:40:52.148429 1219144 kubeadm.go:310] 
	I0414 13:40:52.148495 1219144 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 13:40:52.148546 1219144 kubeadm.go:310] 		timed out waiting for the condition
	I0414 13:40:52.148558 1219144 kubeadm.go:310] 
	I0414 13:40:52.148600 1219144 kubeadm.go:310] 	This error is likely caused by:
	I0414 13:40:52.148641 1219144 kubeadm.go:310] 		- The kubelet is not running
	I0414 13:40:52.148780 1219144 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 13:40:52.148790 1219144 kubeadm.go:310] 
	I0414 13:40:52.148974 1219144 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 13:40:52.149033 1219144 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 13:40:52.149073 1219144 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 13:40:52.149081 1219144 kubeadm.go:310] 
	I0414 13:40:52.149201 1219144 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 13:40:52.149333 1219144 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 13:40:52.149343 1219144 kubeadm.go:310] 
	I0414 13:40:52.149511 1219144 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 13:40:52.149590 1219144 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 13:40:52.149693 1219144 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 13:40:52.149808 1219144 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 13:40:52.149826 1219144 kubeadm.go:310] 
	I0414 13:40:52.150131 1219144 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 13:40:52.150280 1219144 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 13:40:52.150404 1219144 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 13:40:52.150489 1219144 kubeadm.go:394] duration metric: took 3m57.894675447s to StartCluster
	I0414 13:40:52.150559 1219144 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:40:52.150618 1219144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:40:52.187812 1219144 cri.go:89] found id: ""
	I0414 13:40:52.187849 1219144 logs.go:282] 0 containers: []
	W0414 13:40:52.187861 1219144 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:40:52.187870 1219144 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:40:52.187932 1219144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:40:52.223693 1219144 cri.go:89] found id: ""
	I0414 13:40:52.223727 1219144 logs.go:282] 0 containers: []
	W0414 13:40:52.223736 1219144 logs.go:284] No container was found matching "etcd"
	I0414 13:40:52.223742 1219144 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:40:52.223797 1219144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:40:52.257735 1219144 cri.go:89] found id: ""
	I0414 13:40:52.257769 1219144 logs.go:282] 0 containers: []
	W0414 13:40:52.257779 1219144 logs.go:284] No container was found matching "coredns"
	I0414 13:40:52.257785 1219144 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:40:52.257845 1219144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:40:52.296712 1219144 cri.go:89] found id: ""
	I0414 13:40:52.296742 1219144 logs.go:282] 0 containers: []
	W0414 13:40:52.296750 1219144 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:40:52.296757 1219144 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:40:52.296825 1219144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:40:52.333394 1219144 cri.go:89] found id: ""
	I0414 13:40:52.333433 1219144 logs.go:282] 0 containers: []
	W0414 13:40:52.333446 1219144 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:40:52.333465 1219144 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:40:52.333536 1219144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:40:52.371502 1219144 cri.go:89] found id: ""
	I0414 13:40:52.371539 1219144 logs.go:282] 0 containers: []
	W0414 13:40:52.371548 1219144 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:40:52.371554 1219144 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:40:52.371611 1219144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:40:52.409666 1219144 cri.go:89] found id: ""
	I0414 13:40:52.409706 1219144 logs.go:282] 0 containers: []
	W0414 13:40:52.409718 1219144 logs.go:284] No container was found matching "kindnet"
	I0414 13:40:52.409733 1219144 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:40:52.409747 1219144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:40:52.533551 1219144 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:40:52.533582 1219144 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:40:52.533599 1219144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:40:52.638677 1219144 logs.go:123] Gathering logs for container status ...
	I0414 13:40:52.638733 1219144 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:40:52.682532 1219144 logs.go:123] Gathering logs for kubelet ...
	I0414 13:40:52.682569 1219144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:40:52.755405 1219144 logs.go:123] Gathering logs for dmesg ...
	I0414 13:40:52.755457 1219144 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0414 13:40:52.770801 1219144 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 13:40:52.770921 1219144 out.go:270] * 
	* 
	W0414 13:40:52.771021 1219144 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:40:52.771046 1219144 out.go:270] * 
	* 
	W0414 13:40:52.772218 1219144 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 13:40:52.776191 1219144 out.go:201] 
	W0414 13:40:52.777832 1219144 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:40:52.777907 1219144 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 13:40:52.777955 1219144 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 13:40:52.779543 1219144 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-966509 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
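The wait-control-plane phase above times out because the kubelet never answers on its health port. A minimal triage sketch on the node, using only commands the log itself suggests (the profile name old-k8s-version-966509 and the CRI-O socket path are taken from the log; container IDs will differ per run):

	# open a shell on the node first
	out/minikube-linux-amd64 -p old-k8s-version-966509 ssh

	# does the kubelet answer its health endpoint?
	curl -sSL http://localhost:10248/healthz

	# inspect the kubelet service and its journal
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# list any Kubernetes containers CRI-O managed to start, then pull logs for a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID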
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 6 (267.216709ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 13:40:53.097853 1222673 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-966509" does not appear in /home/jenkins/minikube-integration/20384-1167927/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-966509" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (293.03s)
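The log's closing suggestion is to retry the same start with the kubelet cgroup driver pinned to systemd. A sketch of that retry, reusing the exact arguments of the failing invocation plus the suggested extra-config (whether it resolves this particular run is not verified here):

	out/minikube-linux-amd64 start -p old-k8s-version-966509 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd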

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-966509 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-966509 create -f testdata/busybox.yaml: exit status 1 (62.398888ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-966509" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-966509 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 6 (288.429683ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 13:40:53.446692 1222714 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-966509" does not appear in /home/jenkins/minikube-integration/20384-1167927/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-966509" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 6 (271.211964ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 13:40:53.724778 1222744 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-966509" does not appear in /home/jenkins/minikube-integration/20384-1167927/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-966509" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.62s)
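DeployApp never reaches the cluster: kubectl is still pointed at a context that the failed first start never registered. A small sketch of repairing the context before retrying the deploy, following the warning in the status output (it assumes the apiserver eventually comes up, which the FirstStart failure above disputes):

	# rewrite the kubeconfig entry for this profile, as the warning suggests
	out/minikube-linux-amd64 -p old-k8s-version-966509 update-context

	# confirm the context now exists, then retry the deploy the test runs
	kubectl config get-contexts old-k8s-version-966509
	kubectl --context old-k8s-version-966509 create -f testdata/busybox.yaml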

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (77.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-966509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-966509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m17.114701918s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-966509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-966509 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-966509 describe deploy/metrics-server -n kube-system: exit status 1 (57.987443ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-966509" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-966509 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 6 (268.493538ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 13:42:11.168256 1223281 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-966509" does not appear in /home/jenkins/minikube-integration/20384-1167927/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-966509" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (77.44s)
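The addon enable renders its manifests with the overridden image, but applying them fails because the apiserver on localhost:8443 is down. Once an apiserver is reachable, one way to confirm the override landed is sketched below; the enable command mirrors the one the test runs, and the jsonpath query is simply one convenient way to read the image field the test asserts on:

	out/minikube-linux-amd64 -p old-k8s-version-966509 addons enable metrics-server \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

	kubectl --context old-k8s-version-966509 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# the test expects this to contain fake.domain/registry.k8s.io/echoserver:1.4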

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (516.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-966509 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0414 13:44:57.145746 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-966509 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m34.531764586s)

                                                
                                                
-- stdout --
	* [old-k8s-version-966509] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-966509" primary control-plane node in "old-k8s-version-966509" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-966509" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 13:42:14.784954 1223410 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:42:14.785292 1223410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:42:14.785305 1223410 out.go:358] Setting ErrFile to fd 2...
	I0414 13:42:14.785309 1223410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:42:14.785526 1223410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:42:14.786134 1223410 out.go:352] Setting JSON to false
	I0414 13:42:14.787608 1223410 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19482,"bootTime":1744618653,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:42:14.787791 1223410 start.go:139] virtualization: kvm guest
	I0414 13:42:14.790230 1223410 out.go:177] * [old-k8s-version-966509] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:42:14.792160 1223410 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 13:42:14.792250 1223410 notify.go:220] Checking for updates...
	I0414 13:42:14.795555 1223410 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:42:14.797683 1223410 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:42:14.799590 1223410 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:42:14.801438 1223410 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:42:14.803266 1223410 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:42:14.805686 1223410 config.go:182] Loaded profile config "old-k8s-version-966509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 13:42:14.806161 1223410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:42:14.806280 1223410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:42:14.825186 1223410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I0414 13:42:14.825827 1223410 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:42:14.826680 1223410 main.go:141] libmachine: Using API Version  1
	I0414 13:42:14.826717 1223410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:42:14.827178 1223410 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:42:14.827467 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:42:14.829833 1223410 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 13:42:14.831850 1223410 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:42:14.832261 1223410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:42:14.832353 1223410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:42:14.852191 1223410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I0414 13:42:14.852760 1223410 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:42:14.853415 1223410 main.go:141] libmachine: Using API Version  1
	I0414 13:42:14.853444 1223410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:42:14.853821 1223410 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:42:14.854053 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:42:14.899830 1223410 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 13:42:14.901618 1223410 start.go:297] selected driver: kvm2
	I0414 13:42:14.901652 1223410 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-966509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:42:14.901853 1223410 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:42:14.902707 1223410 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:42:14.902822 1223410 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:42:14.921563 1223410 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:42:14.922045 1223410 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:42:14.922089 1223410 cni.go:84] Creating CNI manager for ""
	I0414 13:42:14.922135 1223410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:42:14.922192 1223410 start.go:340] cluster config:
	{Name:old-k8s-version-966509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:42:14.922336 1223410 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:42:14.924748 1223410 out.go:177] * Starting "old-k8s-version-966509" primary control-plane node in "old-k8s-version-966509" cluster
	I0414 13:42:14.926471 1223410 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 13:42:14.926560 1223410 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 13:42:14.926577 1223410 cache.go:56] Caching tarball of preloaded images
	I0414 13:42:14.926736 1223410 preload.go:172] Found /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 13:42:14.926757 1223410 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 13:42:14.926906 1223410 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/config.json ...
	I0414 13:42:14.927248 1223410 start.go:360] acquireMachinesLock for old-k8s-version-966509: {Name:mk1c744e856a4885e6214d755048926590b4b12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 13:42:14.927330 1223410 start.go:364] duration metric: took 46.792µs to acquireMachinesLock for "old-k8s-version-966509"
	I0414 13:42:14.927356 1223410 start.go:96] Skipping create...Using existing machine configuration
	I0414 13:42:14.927364 1223410 fix.go:54] fixHost starting: 
	I0414 13:42:14.927782 1223410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:42:14.927839 1223410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:42:14.945271 1223410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
	I0414 13:42:14.945824 1223410 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:42:14.948595 1223410 main.go:141] libmachine: Using API Version  1
	I0414 13:42:14.948637 1223410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:42:14.949281 1223410 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:42:14.949546 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:42:14.949793 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetState
	I0414 13:42:14.951874 1223410 fix.go:112] recreateIfNeeded on old-k8s-version-966509: state=Stopped err=<nil>
	I0414 13:42:14.951909 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	W0414 13:42:14.952148 1223410 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 13:42:14.954525 1223410 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-966509" ...
	I0414 13:42:14.955954 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .Start
	I0414 13:42:14.956329 1223410 main.go:141] libmachine: (old-k8s-version-966509) starting domain...
	I0414 13:42:14.956353 1223410 main.go:141] libmachine: (old-k8s-version-966509) ensuring networks are active...
	I0414 13:42:14.957524 1223410 main.go:141] libmachine: (old-k8s-version-966509) Ensuring network default is active
	I0414 13:42:14.958127 1223410 main.go:141] libmachine: (old-k8s-version-966509) Ensuring network mk-old-k8s-version-966509 is active
	I0414 13:42:14.958541 1223410 main.go:141] libmachine: (old-k8s-version-966509) getting domain XML...
	I0414 13:42:14.959511 1223410 main.go:141] libmachine: (old-k8s-version-966509) creating domain...
	I0414 13:42:16.388495 1223410 main.go:141] libmachine: (old-k8s-version-966509) waiting for IP...
	I0414 13:42:16.389882 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:16.390582 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:16.390732 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:16.390606 1223446 retry.go:31] will retry after 241.266846ms: waiting for domain to come up
	I0414 13:42:16.633490 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:16.634252 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:16.634283 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:16.634192 1223446 retry.go:31] will retry after 340.256546ms: waiting for domain to come up
	I0414 13:42:16.976240 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:16.977006 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:16.977039 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:16.976963 1223446 retry.go:31] will retry after 310.959814ms: waiting for domain to come up
	I0414 13:42:17.290121 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:17.290829 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:17.290860 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:17.290793 1223446 retry.go:31] will retry after 596.31341ms: waiting for domain to come up
	I0414 13:42:17.888797 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:17.889697 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:17.889757 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:17.889525 1223446 retry.go:31] will retry after 483.492304ms: waiting for domain to come up
	I0414 13:42:18.374324 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:18.374856 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:18.374881 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:18.374816 1223446 retry.go:31] will retry after 648.733299ms: waiting for domain to come up
	I0414 13:42:19.024613 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:19.025034 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:19.025072 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:19.024977 1223446 retry.go:31] will retry after 1.05721048s: waiting for domain to come up
	I0414 13:42:20.083797 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:20.084332 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:20.084362 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:20.084275 1223446 retry.go:31] will retry after 1.051681007s: waiting for domain to come up
	I0414 13:42:21.137465 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:21.138264 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:21.138296 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:21.138197 1223446 retry.go:31] will retry after 1.278448612s: waiting for domain to come up
	I0414 13:42:22.418273 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:22.418859 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:22.418888 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:22.418813 1223446 retry.go:31] will retry after 1.551554148s: waiting for domain to come up
	I0414 13:42:23.972838 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:23.973667 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:23.973707 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:23.973614 1223446 retry.go:31] will retry after 2.412225956s: waiting for domain to come up
	I0414 13:42:26.387872 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:26.388506 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:26.388538 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:26.388444 1223446 retry.go:31] will retry after 2.647386769s: waiting for domain to come up
	I0414 13:42:29.039310 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:29.039933 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:29.039970 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:29.039875 1223446 retry.go:31] will retry after 2.807706297s: waiting for domain to come up
	I0414 13:42:31.849274 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:31.849990 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | unable to find current IP address of domain old-k8s-version-966509 in network mk-old-k8s-version-966509
	I0414 13:42:31.850026 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | I0414 13:42:31.849927 1223446 retry.go:31] will retry after 4.15302276s: waiting for domain to come up
	I0414 13:42:36.005061 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.005650 1223410 main.go:141] libmachine: (old-k8s-version-966509) found domain IP: 192.168.61.227
	I0414 13:42:36.005679 1223410 main.go:141] libmachine: (old-k8s-version-966509) reserving static IP address...
	I0414 13:42:36.005755 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has current primary IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.006465 1223410 main.go:141] libmachine: (old-k8s-version-966509) reserved static IP address 192.168.61.227 for domain old-k8s-version-966509
	I0414 13:42:36.006486 1223410 main.go:141] libmachine: (old-k8s-version-966509) waiting for SSH...
	I0414 13:42:36.006507 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "old-k8s-version-966509", mac: "52:54:00:3e:6b:50", ip: "192.168.61.227"} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:36.006530 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | skip adding static IP to network mk-old-k8s-version-966509 - found existing host DHCP lease matching {name: "old-k8s-version-966509", mac: "52:54:00:3e:6b:50", ip: "192.168.61.227"}
	I0414 13:42:36.006554 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | Getting to WaitForSSH function...
	I0414 13:42:36.009474 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.010316 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:36.010358 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.010649 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | Using SSH client type: external
	I0414 13:42:36.010677 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | Using SSH private key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa (-rw-------)
	I0414 13:42:36.010698 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:42:36.010708 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | About to run SSH command:
	I0414 13:42:36.010720 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | exit 0
	I0414 13:42:36.132645 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | SSH cmd err, output: <nil>: 
	I0414 13:42:36.133046 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetConfigRaw
	I0414 13:42:36.133791 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetIP
	I0414 13:42:36.137395 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.137856 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:36.137906 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.138368 1223410 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/config.json ...
	I0414 13:42:36.138617 1223410 machine.go:93] provisionDockerMachine start ...
	I0414 13:42:36.138641 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:42:36.139033 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:42:36.142492 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.143038 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:36.143072 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.143398 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:42:36.143636 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:36.143873 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:36.144125 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:42:36.144316 1223410 main.go:141] libmachine: Using SSH client type: native
	I0414 13:42:36.144583 1223410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:42:36.144597 1223410 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 13:42:36.248423 1223410 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 13:42:36.248453 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetMachineName
	I0414 13:42:36.248759 1223410 buildroot.go:166] provisioning hostname "old-k8s-version-966509"
	I0414 13:42:36.248791 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetMachineName
	I0414 13:42:36.249024 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:42:36.252871 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.253435 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:36.253465 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.253707 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:42:36.253992 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:36.254274 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:36.254505 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:42:36.254710 1223410 main.go:141] libmachine: Using SSH client type: native
	I0414 13:42:36.255049 1223410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:42:36.255069 1223410 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-966509 && echo "old-k8s-version-966509" | sudo tee /etc/hostname
	I0414 13:42:36.379332 1223410 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-966509
	
	I0414 13:42:36.379373 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:42:36.382843 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.383349 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:36.383386 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.383717 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:42:36.383982 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:36.384359 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:36.384576 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:42:36.384800 1223410 main.go:141] libmachine: Using SSH client type: native
	I0414 13:42:36.385250 1223410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:42:36.385282 1223410 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-966509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-966509/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-966509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:42:36.497887 1223410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:42:36.497929 1223410 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 13:42:36.497966 1223410 buildroot.go:174] setting up certificates
	I0414 13:42:36.497979 1223410 provision.go:84] configureAuth start
	I0414 13:42:36.497993 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetMachineName
	I0414 13:42:36.498362 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetIP
	I0414 13:42:36.501787 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.502309 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:36.502341 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.502603 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:42:36.505492 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.505936 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:36.505964 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.506141 1223410 provision.go:143] copyHostCerts
	I0414 13:42:36.506221 1223410 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem, removing ...
	I0414 13:42:36.506255 1223410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem
	I0414 13:42:36.506333 1223410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 13:42:36.506426 1223410 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem, removing ...
	I0414 13:42:36.506434 1223410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem
	I0414 13:42:36.506464 1223410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 13:42:36.506541 1223410 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem, removing ...
	I0414 13:42:36.506553 1223410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem
	I0414 13:42:36.506588 1223410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 13:42:36.506660 1223410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-966509 san=[127.0.0.1 192.168.61.227 localhost minikube old-k8s-version-966509]
	I0414 13:42:36.586990 1223410 provision.go:177] copyRemoteCerts
	I0414 13:42:36.587061 1223410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:42:36.587089 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:42:36.590686 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.591214 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:36.591261 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.591587 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:42:36.591866 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:36.592110 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:42:36.592298 1223410 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa Username:docker}
	I0414 13:42:36.675093 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:42:36.703459 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 13:42:36.736350 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 13:42:36.764596 1223410 provision.go:87] duration metric: took 266.60192ms to configureAuth
	I0414 13:42:36.764628 1223410 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:42:36.764802 1223410 config.go:182] Loaded profile config "old-k8s-version-966509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 13:42:36.764876 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:42:36.767815 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.768356 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:36.768383 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:36.768685 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:42:36.768995 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:36.769273 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:36.769476 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:42:36.769707 1223410 main.go:141] libmachine: Using SSH client type: native
	I0414 13:42:36.769912 1223410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:42:36.769928 1223410 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:42:37.031597 1223410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:42:37.031631 1223410 machine.go:96] duration metric: took 892.998023ms to provisionDockerMachine
	I0414 13:42:37.031647 1223410 start.go:293] postStartSetup for "old-k8s-version-966509" (driver="kvm2")
	I0414 13:42:37.031688 1223410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:42:37.031717 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:42:37.032206 1223410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:42:37.032278 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:42:37.036573 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:37.037012 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:37.037061 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:37.037330 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:42:37.037592 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:37.037805 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:42:37.038063 1223410 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa Username:docker}
	I0414 13:42:37.123305 1223410 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:42:37.128048 1223410 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:42:37.128089 1223410 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 13:42:37.128189 1223410 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 13:42:37.128275 1223410 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem -> 11757462.pem in /etc/ssl/certs
	I0414 13:42:37.128368 1223410 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:42:37.139283 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:42:37.166657 1223410 start.go:296] duration metric: took 134.956968ms for postStartSetup
	I0414 13:42:37.166724 1223410 fix.go:56] duration metric: took 22.239359356s for fixHost
	I0414 13:42:37.166757 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:42:37.170561 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:37.171284 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:37.171327 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:37.171606 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:42:37.171906 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:37.172132 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:37.172389 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:42:37.172652 1223410 main.go:141] libmachine: Using SSH client type: native
	I0414 13:42:37.172872 1223410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0414 13:42:37.172884 1223410 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:42:37.277084 1223410 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744638157.244369251
	
	I0414 13:42:37.277114 1223410 fix.go:216] guest clock: 1744638157.244369251
	I0414 13:42:37.277125 1223410 fix.go:229] Guest: 2025-04-14 13:42:37.244369251 +0000 UTC Remote: 2025-04-14 13:42:37.166730851 +0000 UTC m=+22.426994720 (delta=77.6384ms)
	I0414 13:42:37.277155 1223410 fix.go:200] guest clock delta is within tolerance: 77.6384ms
	I0414 13:42:37.277163 1223410 start.go:83] releasing machines lock for "old-k8s-version-966509", held for 22.349815775s
	I0414 13:42:37.277188 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:42:37.277553 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetIP
	I0414 13:42:37.281022 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:37.281646 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:37.281688 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:37.281850 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:42:37.282781 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:42:37.283080 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .DriverName
	I0414 13:42:37.283172 1223410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:42:37.283234 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:42:37.283369 1223410 ssh_runner.go:195] Run: cat /version.json
	I0414 13:42:37.283403 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHHostname
	I0414 13:42:37.287022 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:37.287060 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:37.287446 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:37.287485 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:37.287521 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:37.287730 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:37.287750 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:42:37.288042 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHPort
	I0414 13:42:37.288050 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:37.288232 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:42:37.288452 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHKeyPath
	I0414 13:42:37.288466 1223410 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa Username:docker}
	I0414 13:42:37.288658 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetSSHUsername
	I0414 13:42:37.288850 1223410 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/old-k8s-version-966509/id_rsa Username:docker}
	I0414 13:42:37.391948 1223410 ssh_runner.go:195] Run: systemctl --version
	I0414 13:42:37.398706 1223410 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:42:37.548443 1223410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:42:37.556104 1223410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:42:37.556197 1223410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:42:37.573460 1223410 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:42:37.573493 1223410 start.go:495] detecting cgroup driver to use...
	I0414 13:42:37.573590 1223410 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:42:37.590618 1223410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:42:37.605007 1223410 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:42:37.605092 1223410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:42:37.621640 1223410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:42:37.637898 1223410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:42:37.768892 1223410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:42:37.936411 1223410 docker.go:233] disabling docker service ...
	I0414 13:42:37.936494 1223410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:42:37.951312 1223410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:42:37.966838 1223410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:42:38.115473 1223410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:42:38.246248 1223410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:42:38.262914 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:42:38.283488 1223410 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 13:42:38.283675 1223410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:42:38.296724 1223410 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:42:38.296826 1223410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:42:38.309118 1223410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:42:38.321450 1223410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:42:38.332936 1223410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:42:38.344656 1223410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:42:38.354947 1223410 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:42:38.355044 1223410 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:42:38.369831 1223410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:42:38.381372 1223410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:42:38.499054 1223410 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:42:38.597389 1223410 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:42:38.597486 1223410 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:42:38.604727 1223410 start.go:563] Will wait 60s for crictl version
	I0414 13:42:38.604794 1223410 ssh_runner.go:195] Run: which crictl
	I0414 13:42:38.609638 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:42:38.654091 1223410 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:42:38.654194 1223410 ssh_runner.go:195] Run: crio --version
	I0414 13:42:38.684975 1223410 ssh_runner.go:195] Run: crio --version
	I0414 13:42:38.718784 1223410 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 13:42:38.720583 1223410 main.go:141] libmachine: (old-k8s-version-966509) Calling .GetIP
	I0414 13:42:38.724845 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:38.725338 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6b:50", ip: ""} in network mk-old-k8s-version-966509: {Iface:virbr3 ExpiryTime:2025-04-14 14:36:33 +0000 UTC Type:0 Mac:52:54:00:3e:6b:50 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:old-k8s-version-966509 Clientid:01:52:54:00:3e:6b:50}
	I0414 13:42:38.725374 1223410 main.go:141] libmachine: (old-k8s-version-966509) DBG | domain old-k8s-version-966509 has defined IP address 192.168.61.227 and MAC address 52:54:00:3e:6b:50 in network mk-old-k8s-version-966509
	I0414 13:42:38.725722 1223410 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 13:42:38.730551 1223410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:42:38.744442 1223410 kubeadm.go:883] updating cluster {Name:old-k8s-version-966509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:42:38.744604 1223410 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 13:42:38.744654 1223410 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:42:38.800753 1223410 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 13:42:38.800827 1223410 ssh_runner.go:195] Run: which lz4
	I0414 13:42:38.805522 1223410 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:42:38.810767 1223410 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:42:38.810806 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 13:42:40.755838 1223410 crio.go:462] duration metric: took 1.950344659s to copy over tarball
	I0414 13:42:40.755920 1223410 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:42:44.291099 1223410 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.535148596s)
	I0414 13:42:44.291131 1223410 crio.go:469] duration metric: took 3.535257465s to extract the tarball
	I0414 13:42:44.291139 1223410 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:42:44.339507 1223410 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:42:44.478457 1223410 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 13:42:44.478502 1223410 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 13:42:44.478604 1223410 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:42:44.478615 1223410 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:42:44.478652 1223410 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:42:44.478672 1223410 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:42:44.478704 1223410 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 13:42:44.478702 1223410 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:42:44.478715 1223410 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 13:42:44.478714 1223410 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:42:44.480891 1223410 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:42:44.480928 1223410 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 13:42:44.480903 1223410 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 13:42:44.481067 1223410 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:42:44.481099 1223410 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:42:44.481124 1223410 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:42:44.481129 1223410 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:42:44.481409 1223410 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:42:44.668067 1223410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 13:42:44.669831 1223410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:42:44.683627 1223410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:42:44.700671 1223410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 13:42:44.703092 1223410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:42:44.722759 1223410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:42:44.725578 1223410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 13:42:44.779378 1223410 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 13:42:44.779520 1223410 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 13:42:44.779614 1223410 ssh_runner.go:195] Run: which crictl
	I0414 13:42:44.813482 1223410 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 13:42:44.813546 1223410 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:42:44.813604 1223410 ssh_runner.go:195] Run: which crictl
	I0414 13:42:44.880050 1223410 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 13:42:44.880107 1223410 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 13:42:44.880123 1223410 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:42:44.880147 1223410 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 13:42:44.880194 1223410 ssh_runner.go:195] Run: which crictl
	I0414 13:42:44.880202 1223410 ssh_runner.go:195] Run: which crictl
	I0414 13:42:44.903233 1223410 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 13:42:44.903370 1223410 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:42:44.903443 1223410 ssh_runner.go:195] Run: which crictl
	I0414 13:42:44.903285 1223410 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 13:42:44.903520 1223410 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:42:44.903573 1223410 ssh_runner.go:195] Run: which crictl
	I0414 13:42:44.905480 1223410 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 13:42:44.905530 1223410 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 13:42:44.905574 1223410 ssh_runner.go:195] Run: which crictl
	I0414 13:42:44.905684 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:42:44.905772 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:42:44.905841 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:42:44.905908 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:42:44.910332 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:42:44.918216 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:42:45.074011 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:42:45.074011 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:42:45.074132 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:42:45.074159 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:42:45.074175 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:42:45.074253 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:42:45.079022 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:42:45.229703 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 13:42:45.229737 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 13:42:45.229742 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:42:45.245927 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 13:42:45.245982 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 13:42:45.246052 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 13:42:45.246075 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 13:42:45.388424 1223410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 13:42:45.388561 1223410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 13:42:45.393813 1223410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 13:42:45.424217 1223410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 13:42:45.424330 1223410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 13:42:45.435309 1223410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 13:42:45.435309 1223410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 13:42:45.465841 1223410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 13:42:46.124269 1223410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:42:46.277501 1223410 cache_images.go:92] duration metric: took 1.798976981s to LoadCachedImages
	W0414 13:42:46.277599 1223410 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0414 13:42:46.277614 1223410 kubeadm.go:934] updating node { 192.168.61.227 8443 v1.20.0 crio true true} ...
	I0414 13:42:46.277763 1223410 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-966509 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 13:42:46.277857 1223410 ssh_runner.go:195] Run: crio config
	I0414 13:42:46.334506 1223410 cni.go:84] Creating CNI manager for ""
	I0414 13:42:46.334530 1223410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 13:42:46.334542 1223410 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:42:46.334561 1223410 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-966509 NodeName:old-k8s-version-966509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 13:42:46.334695 1223410 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-966509"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 13:42:46.334768 1223410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 13:42:46.345438 1223410 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:42:46.345527 1223410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:42:46.357050 1223410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0414 13:42:46.377302 1223410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:42:46.396405 1223410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0414 13:42:46.416221 1223410 ssh_runner.go:195] Run: grep 192.168.61.227	control-plane.minikube.internal$ /etc/hosts
	I0414 13:42:46.420573 1223410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:42:46.434699 1223410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:42:46.579128 1223410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:42:46.610485 1223410 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509 for IP: 192.168.61.227
	I0414 13:42:46.610601 1223410 certs.go:194] generating shared ca certs ...
	I0414 13:42:46.610630 1223410 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:42:46.610919 1223410 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:42:46.610984 1223410 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:42:46.610998 1223410 certs.go:256] generating profile certs ...
	I0414 13:42:46.611229 1223410 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/client.key
	I0414 13:42:46.611314 1223410 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key.c3cdf645
	I0414 13:42:46.611377 1223410 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.key
	I0414 13:42:46.611509 1223410 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:42:46.611551 1223410 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:42:46.611566 1223410 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:42:46.611601 1223410 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:42:46.611631 1223410 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:42:46.611688 1223410 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:42:46.611750 1223410 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:42:46.612685 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:42:46.668086 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:42:46.707998 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:42:46.746593 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:42:46.788867 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 13:42:46.848273 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:42:46.897794 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:42:46.935643 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/old-k8s-version-966509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 13:42:46.963387 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:42:46.993678 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:42:47.023795 1223410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
	I0414 13:42:47.055455 1223410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:42:47.078653 1223410 ssh_runner.go:195] Run: openssl version
	I0414 13:42:47.085189 1223410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:42:47.098634 1223410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:42:47.103452 1223410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:42:47.103532 1223410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:42:47.109692 1223410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 13:42:47.124008 1223410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:42:47.137193 1223410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:42:47.142776 1223410 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:42:47.142859 1223410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:42:47.149969 1223410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
	I0414 13:42:47.162857 1223410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:42:47.176638 1223410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:42:47.182087 1223410 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:42:47.182179 1223410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:42:47.188935 1223410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 13:42:47.202533 1223410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:42:47.208405 1223410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 13:42:47.215861 1223410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 13:42:47.223256 1223410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 13:42:47.230925 1223410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 13:42:47.237612 1223410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 13:42:47.244752 1223410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 13:42:47.252105 1223410 kubeadm.go:392] StartCluster: {Name:old-k8s-version-966509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-966509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:42:47.252221 1223410 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:42:47.252294 1223410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:42:47.300707 1223410 cri.go:89] found id: ""
	I0414 13:42:47.300781 1223410 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:42:47.312342 1223410 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 13:42:47.312372 1223410 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 13:42:47.312432 1223410 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 13:42:47.325324 1223410 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 13:42:47.326234 1223410 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-966509" does not appear in /home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:42:47.326713 1223410 kubeconfig.go:62] /home/jenkins/minikube-integration/20384-1167927/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-966509" cluster setting kubeconfig missing "old-k8s-version-966509" context setting]
	I0414 13:42:47.327496 1223410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/kubeconfig: {Name:mk5eb6c4765d4c70f1db00acbce88c0952cb579b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:42:47.329311 1223410 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 13:42:47.341260 1223410 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.227
	I0414 13:42:47.341312 1223410 kubeadm.go:1160] stopping kube-system containers ...
	I0414 13:42:47.341328 1223410 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 13:42:47.341389 1223410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:42:47.381597 1223410 cri.go:89] found id: ""
	I0414 13:42:47.381686 1223410 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 13:42:47.401761 1223410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:42:47.413448 1223410 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:42:47.413475 1223410 kubeadm.go:157] found existing configuration files:
	
	I0414 13:42:47.413520 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:42:47.425033 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:42:47.425111 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:42:47.436369 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:42:47.446948 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:42:47.447033 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:42:47.459399 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:42:47.471408 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:42:47.471494 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:42:47.482271 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:42:47.492394 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:42:47.492480 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:42:47.504245 1223410 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:42:47.515685 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:42:47.916079 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:42:48.754556 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:42:49.000251 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:42:49.142443 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 13:42:49.225083 1223410 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:42:49.225187 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:49.725591 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:50.226228 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:50.726179 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:51.226301 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:51.725605 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:52.225374 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:52.725843 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:53.226224 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:53.726310 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:54.226046 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:54.725784 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:55.225759 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:55.725297 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:56.225669 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:56.726352 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:57.226114 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:57.726134 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:58.225754 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:58.726247 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:59.225412 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:42:59.726345 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:00.226249 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:00.725419 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:01.225796 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:01.725550 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:02.225935 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:02.725387 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:03.226021 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:03.726292 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:04.226209 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:04.725597 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:05.226304 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:05.726140 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:06.225585 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:06.725560 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:07.226249 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:07.725957 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:08.226145 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:08.726261 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:09.225425 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:09.725675 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:10.225735 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:10.726311 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:11.226074 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:11.725439 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:12.225544 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:12.726146 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:13.225314 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:13.725367 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:14.225733 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:14.725994 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:15.226133 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:15.725968 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:16.225297 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:16.726039 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:17.226070 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:17.725931 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:18.226326 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:18.725567 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:19.225267 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:19.726221 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:20.225425 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:20.726249 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:21.225553 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:21.725434 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:22.225690 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:22.726200 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:23.225598 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:23.725485 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:24.226269 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:24.725419 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:25.225422 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:25.725398 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:26.226362 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:26.725708 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:27.225341 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:27.726116 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:28.225449 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:28.725464 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:29.225366 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:29.726438 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:30.225354 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:30.725342 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:31.226204 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:31.726122 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:32.225694 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:32.725884 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:33.226350 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:33.726258 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:34.225529 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:34.726215 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:35.225729 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:35.726213 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:36.225968 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:36.725731 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:37.225497 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:37.726299 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:38.225667 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:38.726230 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:39.226050 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:39.725340 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:40.225687 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:40.725578 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:41.225677 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:41.726047 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:42.225790 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:42.726211 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:43.225967 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:43.725754 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:44.226165 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:44.726321 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:45.225464 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:45.726147 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:46.226112 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:46.726273 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:47.226109 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:47.725807 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:48.225985 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:48.726010 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:49.225412 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:43:49.225496 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:43:49.262249 1223410 cri.go:89] found id: ""
	I0414 13:43:49.262287 1223410 logs.go:282] 0 containers: []
	W0414 13:43:49.262300 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:43:49.262319 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:43:49.262390 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:43:49.308035 1223410 cri.go:89] found id: ""
	I0414 13:43:49.308074 1223410 logs.go:282] 0 containers: []
	W0414 13:43:49.308087 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:43:49.308095 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:43:49.308164 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:43:49.346882 1223410 cri.go:89] found id: ""
	I0414 13:43:49.346924 1223410 logs.go:282] 0 containers: []
	W0414 13:43:49.346937 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:43:49.346946 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:43:49.347012 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:43:49.385782 1223410 cri.go:89] found id: ""
	I0414 13:43:49.385817 1223410 logs.go:282] 0 containers: []
	W0414 13:43:49.385830 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:43:49.385837 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:43:49.385901 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:43:49.428609 1223410 cri.go:89] found id: ""
	I0414 13:43:49.428644 1223410 logs.go:282] 0 containers: []
	W0414 13:43:49.428654 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:43:49.428662 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:43:49.428736 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:43:49.465811 1223410 cri.go:89] found id: ""
	I0414 13:43:49.465844 1223410 logs.go:282] 0 containers: []
	W0414 13:43:49.465855 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:43:49.465863 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:43:49.465934 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:43:49.501898 1223410 cri.go:89] found id: ""
	I0414 13:43:49.501934 1223410 logs.go:282] 0 containers: []
	W0414 13:43:49.501944 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:43:49.501952 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:43:49.502021 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:43:49.542602 1223410 cri.go:89] found id: ""
	I0414 13:43:49.542645 1223410 logs.go:282] 0 containers: []
	W0414 13:43:49.542654 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:43:49.542665 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:43:49.542678 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:43:49.599030 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:43:49.599077 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:43:49.614463 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:43:49.614509 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:43:49.766131 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:43:49.766162 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:43:49.766180 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:43:49.846844 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:43:49.846894 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:43:52.395827 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:52.412827 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:43:52.412927 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:43:52.450366 1223410 cri.go:89] found id: ""
	I0414 13:43:52.450400 1223410 logs.go:282] 0 containers: []
	W0414 13:43:52.450409 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:43:52.450424 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:43:52.450490 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:43:52.489893 1223410 cri.go:89] found id: ""
	I0414 13:43:52.489933 1223410 logs.go:282] 0 containers: []
	W0414 13:43:52.489944 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:43:52.489950 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:43:52.490015 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:43:52.526632 1223410 cri.go:89] found id: ""
	I0414 13:43:52.526673 1223410 logs.go:282] 0 containers: []
	W0414 13:43:52.526687 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:43:52.526695 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:43:52.526770 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:43:52.566888 1223410 cri.go:89] found id: ""
	I0414 13:43:52.566922 1223410 logs.go:282] 0 containers: []
	W0414 13:43:52.566933 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:43:52.566942 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:43:52.567011 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:43:52.602484 1223410 cri.go:89] found id: ""
	I0414 13:43:52.602517 1223410 logs.go:282] 0 containers: []
	W0414 13:43:52.602526 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:43:52.602532 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:43:52.602600 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:43:52.639325 1223410 cri.go:89] found id: ""
	I0414 13:43:52.639539 1223410 logs.go:282] 0 containers: []
	W0414 13:43:52.639563 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:43:52.639576 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:43:52.639682 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:43:52.681170 1223410 cri.go:89] found id: ""
	I0414 13:43:52.681206 1223410 logs.go:282] 0 containers: []
	W0414 13:43:52.681223 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:43:52.681230 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:43:52.681296 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:43:52.727617 1223410 cri.go:89] found id: ""
	I0414 13:43:52.727681 1223410 logs.go:282] 0 containers: []
	W0414 13:43:52.727695 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:43:52.727710 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:43:52.727726 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:43:52.831719 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:43:52.831767 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:43:52.880210 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:43:52.880253 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:43:52.933911 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:43:52.933962 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:43:52.949250 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:43:52.949308 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:43:53.037910 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:43:55.538091 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:55.551784 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:43:55.551851 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:43:55.592910 1223410 cri.go:89] found id: ""
	I0414 13:43:55.592943 1223410 logs.go:282] 0 containers: []
	W0414 13:43:55.592951 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:43:55.592958 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:43:55.593014 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:43:55.632501 1223410 cri.go:89] found id: ""
	I0414 13:43:55.632551 1223410 logs.go:282] 0 containers: []
	W0414 13:43:55.632564 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:43:55.632573 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:43:55.632642 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:43:55.673888 1223410 cri.go:89] found id: ""
	I0414 13:43:55.673921 1223410 logs.go:282] 0 containers: []
	W0414 13:43:55.673933 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:43:55.673942 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:43:55.674011 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:43:55.714082 1223410 cri.go:89] found id: ""
	I0414 13:43:55.714124 1223410 logs.go:282] 0 containers: []
	W0414 13:43:55.714136 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:43:55.714144 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:43:55.714212 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:43:55.751498 1223410 cri.go:89] found id: ""
	I0414 13:43:55.751532 1223410 logs.go:282] 0 containers: []
	W0414 13:43:55.751540 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:43:55.751546 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:43:55.751607 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:43:55.792579 1223410 cri.go:89] found id: ""
	I0414 13:43:55.792613 1223410 logs.go:282] 0 containers: []
	W0414 13:43:55.792625 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:43:55.792633 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:43:55.792698 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:43:55.832436 1223410 cri.go:89] found id: ""
	I0414 13:43:55.832468 1223410 logs.go:282] 0 containers: []
	W0414 13:43:55.832477 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:43:55.832484 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:43:55.832540 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:43:55.876231 1223410 cri.go:89] found id: ""
	I0414 13:43:55.876265 1223410 logs.go:282] 0 containers: []
	W0414 13:43:55.876274 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:43:55.876288 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:43:55.876300 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:43:55.960008 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:43:55.960061 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:43:56.007550 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:43:56.007591 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:43:56.060475 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:43:56.060526 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:43:56.075446 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:43:56.075488 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:43:56.155450 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:43:58.655841 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:43:58.673285 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:43:58.673370 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:43:58.735519 1223410 cri.go:89] found id: ""
	I0414 13:43:58.735551 1223410 logs.go:282] 0 containers: []
	W0414 13:43:58.735563 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:43:58.735577 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:43:58.735646 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:43:58.802704 1223410 cri.go:89] found id: ""
	I0414 13:43:58.802747 1223410 logs.go:282] 0 containers: []
	W0414 13:43:58.802761 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:43:58.802771 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:43:58.802841 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:43:58.842268 1223410 cri.go:89] found id: ""
	I0414 13:43:58.842301 1223410 logs.go:282] 0 containers: []
	W0414 13:43:58.842311 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:43:58.842318 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:43:58.842383 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:43:58.882657 1223410 cri.go:89] found id: ""
	I0414 13:43:58.882692 1223410 logs.go:282] 0 containers: []
	W0414 13:43:58.882703 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:43:58.882710 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:43:58.882790 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:43:58.920083 1223410 cri.go:89] found id: ""
	I0414 13:43:58.920110 1223410 logs.go:282] 0 containers: []
	W0414 13:43:58.920118 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:43:58.920125 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:43:58.920180 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:43:58.958714 1223410 cri.go:89] found id: ""
	I0414 13:43:58.958762 1223410 logs.go:282] 0 containers: []
	W0414 13:43:58.958779 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:43:58.958789 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:43:58.958880 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:43:58.996877 1223410 cri.go:89] found id: ""
	I0414 13:43:58.996909 1223410 logs.go:282] 0 containers: []
	W0414 13:43:58.996918 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:43:58.996924 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:43:58.996981 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:43:59.036516 1223410 cri.go:89] found id: ""
	I0414 13:43:59.036551 1223410 logs.go:282] 0 containers: []
	W0414 13:43:59.036561 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:43:59.036575 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:43:59.036592 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:43:59.090868 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:43:59.090918 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:43:59.107851 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:43:59.107887 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:43:59.185157 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:43:59.185188 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:43:59.185206 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:43:59.266429 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:43:59.266482 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:01.811847 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:01.826194 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:01.826273 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:01.863548 1223410 cri.go:89] found id: ""
	I0414 13:44:01.863576 1223410 logs.go:282] 0 containers: []
	W0414 13:44:01.863585 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:01.863591 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:01.863647 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:01.903969 1223410 cri.go:89] found id: ""
	I0414 13:44:01.904007 1223410 logs.go:282] 0 containers: []
	W0414 13:44:01.904019 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:01.904034 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:01.904110 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:01.945122 1223410 cri.go:89] found id: ""
	I0414 13:44:01.945154 1223410 logs.go:282] 0 containers: []
	W0414 13:44:01.945164 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:01.945172 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:01.945337 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:01.986675 1223410 cri.go:89] found id: ""
	I0414 13:44:01.986708 1223410 logs.go:282] 0 containers: []
	W0414 13:44:01.986717 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:01.986724 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:01.986792 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:02.028273 1223410 cri.go:89] found id: ""
	I0414 13:44:02.028309 1223410 logs.go:282] 0 containers: []
	W0414 13:44:02.028317 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:02.028324 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:02.028403 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:02.069510 1223410 cri.go:89] found id: ""
	I0414 13:44:02.069547 1223410 logs.go:282] 0 containers: []
	W0414 13:44:02.069558 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:02.069565 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:02.069638 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:02.111233 1223410 cri.go:89] found id: ""
	I0414 13:44:02.111281 1223410 logs.go:282] 0 containers: []
	W0414 13:44:02.111293 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:02.111319 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:02.111445 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:02.161767 1223410 cri.go:89] found id: ""
	I0414 13:44:02.161803 1223410 logs.go:282] 0 containers: []
	W0414 13:44:02.161814 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:02.161826 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:02.161838 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:02.177658 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:02.177693 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:02.258424 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:02.258447 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:02.258460 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:02.343205 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:02.343259 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:02.391062 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:02.391098 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:04.949302 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:04.964421 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:04.964492 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:05.005349 1223410 cri.go:89] found id: ""
	I0414 13:44:05.005385 1223410 logs.go:282] 0 containers: []
	W0414 13:44:05.005395 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:05.005401 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:05.005466 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:05.046177 1223410 cri.go:89] found id: ""
	I0414 13:44:05.046215 1223410 logs.go:282] 0 containers: []
	W0414 13:44:05.046228 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:05.046238 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:05.046303 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:05.090807 1223410 cri.go:89] found id: ""
	I0414 13:44:05.090846 1223410 logs.go:282] 0 containers: []
	W0414 13:44:05.090858 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:05.090867 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:05.090924 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:05.129466 1223410 cri.go:89] found id: ""
	I0414 13:44:05.129497 1223410 logs.go:282] 0 containers: []
	W0414 13:44:05.129506 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:05.129512 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:05.129575 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:05.170890 1223410 cri.go:89] found id: ""
	I0414 13:44:05.170938 1223410 logs.go:282] 0 containers: []
	W0414 13:44:05.170951 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:05.170960 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:05.171055 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:05.209138 1223410 cri.go:89] found id: ""
	I0414 13:44:05.209184 1223410 logs.go:282] 0 containers: []
	W0414 13:44:05.209197 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:05.209208 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:05.209317 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:05.248432 1223410 cri.go:89] found id: ""
	I0414 13:44:05.248476 1223410 logs.go:282] 0 containers: []
	W0414 13:44:05.248488 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:05.248497 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:05.248580 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:05.289184 1223410 cri.go:89] found id: ""
	I0414 13:44:05.289236 1223410 logs.go:282] 0 containers: []
	W0414 13:44:05.289249 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:05.289264 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:05.289282 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:05.307142 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:05.307202 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:05.386059 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:05.386093 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:05.386114 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:05.473465 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:05.473589 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:05.518911 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:05.518944 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:08.072247 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:08.086607 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:08.086681 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:08.123688 1223410 cri.go:89] found id: ""
	I0414 13:44:08.123724 1223410 logs.go:282] 0 containers: []
	W0414 13:44:08.123736 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:08.123743 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:08.123812 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:08.162243 1223410 cri.go:89] found id: ""
	I0414 13:44:08.162305 1223410 logs.go:282] 0 containers: []
	W0414 13:44:08.162323 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:08.162331 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:08.162429 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:08.207332 1223410 cri.go:89] found id: ""
	I0414 13:44:08.207370 1223410 logs.go:282] 0 containers: []
	W0414 13:44:08.207395 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:08.207404 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:08.207478 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:08.247560 1223410 cri.go:89] found id: ""
	I0414 13:44:08.247593 1223410 logs.go:282] 0 containers: []
	W0414 13:44:08.247603 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:08.247609 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:08.247687 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:08.290257 1223410 cri.go:89] found id: ""
	I0414 13:44:08.290303 1223410 logs.go:282] 0 containers: []
	W0414 13:44:08.290315 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:08.290422 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:08.290517 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:08.336077 1223410 cri.go:89] found id: ""
	I0414 13:44:08.336111 1223410 logs.go:282] 0 containers: []
	W0414 13:44:08.336123 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:08.336131 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:08.336200 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:08.379500 1223410 cri.go:89] found id: ""
	I0414 13:44:08.379527 1223410 logs.go:282] 0 containers: []
	W0414 13:44:08.379536 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:08.379542 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:08.379592 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:08.417573 1223410 cri.go:89] found id: ""
	I0414 13:44:08.417605 1223410 logs.go:282] 0 containers: []
	W0414 13:44:08.417617 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:08.417651 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:08.417667 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:08.500184 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:08.500235 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:08.540905 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:08.540943 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:08.593994 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:08.594039 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:08.610547 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:08.610591 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:08.688693 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:11.190194 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:11.206441 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:11.206522 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:11.245799 1223410 cri.go:89] found id: ""
	I0414 13:44:11.245843 1223410 logs.go:282] 0 containers: []
	W0414 13:44:11.245856 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:11.245865 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:11.245934 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:11.284664 1223410 cri.go:89] found id: ""
	I0414 13:44:11.284708 1223410 logs.go:282] 0 containers: []
	W0414 13:44:11.284723 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:11.284731 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:11.284794 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:11.324779 1223410 cri.go:89] found id: ""
	I0414 13:44:11.324812 1223410 logs.go:282] 0 containers: []
	W0414 13:44:11.324821 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:11.324857 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:11.324934 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:11.365463 1223410 cri.go:89] found id: ""
	I0414 13:44:11.365500 1223410 logs.go:282] 0 containers: []
	W0414 13:44:11.365512 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:11.365521 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:11.365586 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:11.401316 1223410 cri.go:89] found id: ""
	I0414 13:44:11.401351 1223410 logs.go:282] 0 containers: []
	W0414 13:44:11.401363 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:11.401371 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:11.401436 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:11.442055 1223410 cri.go:89] found id: ""
	I0414 13:44:11.442086 1223410 logs.go:282] 0 containers: []
	W0414 13:44:11.442095 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:11.442101 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:11.442170 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:11.479894 1223410 cri.go:89] found id: ""
	I0414 13:44:11.479929 1223410 logs.go:282] 0 containers: []
	W0414 13:44:11.479942 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:11.479950 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:11.480020 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:11.519190 1223410 cri.go:89] found id: ""
	I0414 13:44:11.519219 1223410 logs.go:282] 0 containers: []
	W0414 13:44:11.519229 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:11.519242 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:11.519260 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:11.571551 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:11.571599 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:11.588652 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:11.588689 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:11.661979 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:11.662010 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:11.662028 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:11.744887 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:11.744951 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:14.287063 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:14.304266 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:14.304357 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:14.342836 1223410 cri.go:89] found id: ""
	I0414 13:44:14.342866 1223410 logs.go:282] 0 containers: []
	W0414 13:44:14.342874 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:14.342880 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:14.342941 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:14.381491 1223410 cri.go:89] found id: ""
	I0414 13:44:14.381526 1223410 logs.go:282] 0 containers: []
	W0414 13:44:14.381535 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:14.381542 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:14.381616 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:14.419579 1223410 cri.go:89] found id: ""
	I0414 13:44:14.419630 1223410 logs.go:282] 0 containers: []
	W0414 13:44:14.419640 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:14.419647 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:14.419732 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:14.460999 1223410 cri.go:89] found id: ""
	I0414 13:44:14.461034 1223410 logs.go:282] 0 containers: []
	W0414 13:44:14.461042 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:14.461048 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:14.461127 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:14.502419 1223410 cri.go:89] found id: ""
	I0414 13:44:14.502452 1223410 logs.go:282] 0 containers: []
	W0414 13:44:14.502461 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:14.502468 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:14.502540 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:14.541646 1223410 cri.go:89] found id: ""
	I0414 13:44:14.541687 1223410 logs.go:282] 0 containers: []
	W0414 13:44:14.541699 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:14.541708 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:14.541765 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:14.579039 1223410 cri.go:89] found id: ""
	I0414 13:44:14.579069 1223410 logs.go:282] 0 containers: []
	W0414 13:44:14.579078 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:14.579084 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:14.579138 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:14.617532 1223410 cri.go:89] found id: ""
	I0414 13:44:14.617566 1223410 logs.go:282] 0 containers: []
	W0414 13:44:14.617576 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:14.617590 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:14.617605 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:14.632093 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:14.632131 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:14.710444 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:14.710478 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:14.710492 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:14.789664 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:14.789711 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:14.836231 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:14.836290 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:17.392347 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:17.406691 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:17.406775 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:17.449579 1223410 cri.go:89] found id: ""
	I0414 13:44:17.449607 1223410 logs.go:282] 0 containers: []
	W0414 13:44:17.449618 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:17.449626 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:17.449705 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:17.485084 1223410 cri.go:89] found id: ""
	I0414 13:44:17.485116 1223410 logs.go:282] 0 containers: []
	W0414 13:44:17.485124 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:17.485130 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:17.485192 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:17.524649 1223410 cri.go:89] found id: ""
	I0414 13:44:17.524685 1223410 logs.go:282] 0 containers: []
	W0414 13:44:17.524693 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:17.524700 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:17.524759 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:17.565078 1223410 cri.go:89] found id: ""
	I0414 13:44:17.565112 1223410 logs.go:282] 0 containers: []
	W0414 13:44:17.565122 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:17.565129 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:17.565192 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:17.606699 1223410 cri.go:89] found id: ""
	I0414 13:44:17.606735 1223410 logs.go:282] 0 containers: []
	W0414 13:44:17.606743 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:17.606750 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:17.606807 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:17.645358 1223410 cri.go:89] found id: ""
	I0414 13:44:17.645391 1223410 logs.go:282] 0 containers: []
	W0414 13:44:17.645400 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:17.645407 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:17.645466 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:17.684467 1223410 cri.go:89] found id: ""
	I0414 13:44:17.684498 1223410 logs.go:282] 0 containers: []
	W0414 13:44:17.684511 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:17.684517 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:17.684580 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:17.725957 1223410 cri.go:89] found id: ""
	I0414 13:44:17.726067 1223410 logs.go:282] 0 containers: []
	W0414 13:44:17.726086 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:17.726098 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:17.726114 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:17.778871 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:17.778924 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:17.794292 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:17.794338 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:17.875975 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:17.876002 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:17.876023 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:17.952449 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:17.952498 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:20.495733 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:20.511289 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:20.511380 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:20.549735 1223410 cri.go:89] found id: ""
	I0414 13:44:20.549765 1223410 logs.go:282] 0 containers: []
	W0414 13:44:20.549774 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:20.549780 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:20.549833 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:20.590828 1223410 cri.go:89] found id: ""
	I0414 13:44:20.590863 1223410 logs.go:282] 0 containers: []
	W0414 13:44:20.590872 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:20.590878 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:20.590939 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:20.629789 1223410 cri.go:89] found id: ""
	I0414 13:44:20.629820 1223410 logs.go:282] 0 containers: []
	W0414 13:44:20.629829 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:20.629842 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:20.629900 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:20.665159 1223410 cri.go:89] found id: ""
	I0414 13:44:20.665208 1223410 logs.go:282] 0 containers: []
	W0414 13:44:20.665222 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:20.665230 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:20.665298 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:20.705501 1223410 cri.go:89] found id: ""
	I0414 13:44:20.705542 1223410 logs.go:282] 0 containers: []
	W0414 13:44:20.705554 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:20.705563 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:20.705632 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:20.747210 1223410 cri.go:89] found id: ""
	I0414 13:44:20.747253 1223410 logs.go:282] 0 containers: []
	W0414 13:44:20.747266 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:20.747274 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:20.747346 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:20.785683 1223410 cri.go:89] found id: ""
	I0414 13:44:20.785714 1223410 logs.go:282] 0 containers: []
	W0414 13:44:20.785725 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:20.785733 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:20.785799 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:20.830018 1223410 cri.go:89] found id: ""
	I0414 13:44:20.830059 1223410 logs.go:282] 0 containers: []
	W0414 13:44:20.830070 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:20.830083 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:20.830098 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:20.920249 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:20.920301 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:20.968887 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:20.968928 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:21.023246 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:21.023303 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:21.039345 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:21.039379 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:21.138726 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:23.640524 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:23.654061 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:23.654146 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:23.695001 1223410 cri.go:89] found id: ""
	I0414 13:44:23.695034 1223410 logs.go:282] 0 containers: []
	W0414 13:44:23.695043 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:23.695049 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:23.695120 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:23.737557 1223410 cri.go:89] found id: ""
	I0414 13:44:23.737665 1223410 logs.go:282] 0 containers: []
	W0414 13:44:23.737695 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:23.737769 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:23.737847 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:23.780868 1223410 cri.go:89] found id: ""
	I0414 13:44:23.780911 1223410 logs.go:282] 0 containers: []
	W0414 13:44:23.780924 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:23.780945 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:23.781018 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:23.823002 1223410 cri.go:89] found id: ""
	I0414 13:44:23.823032 1223410 logs.go:282] 0 containers: []
	W0414 13:44:23.823041 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:23.823048 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:23.823102 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:23.864566 1223410 cri.go:89] found id: ""
	I0414 13:44:23.864602 1223410 logs.go:282] 0 containers: []
	W0414 13:44:23.864613 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:23.864626 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:23.864700 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:23.903530 1223410 cri.go:89] found id: ""
	I0414 13:44:23.903576 1223410 logs.go:282] 0 containers: []
	W0414 13:44:23.903589 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:23.903601 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:23.903711 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:23.942349 1223410 cri.go:89] found id: ""
	I0414 13:44:23.942391 1223410 logs.go:282] 0 containers: []
	W0414 13:44:23.942403 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:23.942411 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:23.942495 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:23.981174 1223410 cri.go:89] found id: ""
	I0414 13:44:23.981215 1223410 logs.go:282] 0 containers: []
	W0414 13:44:23.981227 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:23.981242 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:23.981259 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:23.996557 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:23.996603 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:24.082725 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:24.082759 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:24.082776 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:24.164708 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:24.164751 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:24.207982 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:24.208015 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:26.759810 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:26.774802 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:26.774892 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:26.846474 1223410 cri.go:89] found id: ""
	I0414 13:44:26.846515 1223410 logs.go:282] 0 containers: []
	W0414 13:44:26.846526 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:26.846533 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:26.846588 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:26.883419 1223410 cri.go:89] found id: ""
	I0414 13:44:26.883447 1223410 logs.go:282] 0 containers: []
	W0414 13:44:26.883455 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:26.883462 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:26.883525 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:26.926022 1223410 cri.go:89] found id: ""
	I0414 13:44:26.926053 1223410 logs.go:282] 0 containers: []
	W0414 13:44:26.926062 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:26.926069 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:26.926140 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:26.967522 1223410 cri.go:89] found id: ""
	I0414 13:44:26.967557 1223410 logs.go:282] 0 containers: []
	W0414 13:44:26.967567 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:26.967573 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:26.967634 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:27.007360 1223410 cri.go:89] found id: ""
	I0414 13:44:27.007401 1223410 logs.go:282] 0 containers: []
	W0414 13:44:27.007413 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:27.007422 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:27.007494 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:27.047567 1223410 cri.go:89] found id: ""
	I0414 13:44:27.047600 1223410 logs.go:282] 0 containers: []
	W0414 13:44:27.047609 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:27.047616 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:27.047712 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:27.088149 1223410 cri.go:89] found id: ""
	I0414 13:44:27.088187 1223410 logs.go:282] 0 containers: []
	W0414 13:44:27.088196 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:27.088203 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:27.088261 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:27.129765 1223410 cri.go:89] found id: ""
	I0414 13:44:27.129802 1223410 logs.go:282] 0 containers: []
	W0414 13:44:27.129812 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:27.129826 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:27.129843 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:27.183437 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:27.183490 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:27.201350 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:27.201401 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:27.275452 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:27.275476 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:27.275493 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:27.360970 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:27.361031 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:29.909049 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:29.923699 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:29.923788 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:29.961545 1223410 cri.go:89] found id: ""
	I0414 13:44:29.961582 1223410 logs.go:282] 0 containers: []
	W0414 13:44:29.961595 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:29.961604 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:29.961677 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:30.003510 1223410 cri.go:89] found id: ""
	I0414 13:44:30.003547 1223410 logs.go:282] 0 containers: []
	W0414 13:44:30.003556 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:30.003562 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:30.003624 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:30.044280 1223410 cri.go:89] found id: ""
	I0414 13:44:30.044320 1223410 logs.go:282] 0 containers: []
	W0414 13:44:30.044331 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:30.044339 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:30.044402 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:30.086246 1223410 cri.go:89] found id: ""
	I0414 13:44:30.086282 1223410 logs.go:282] 0 containers: []
	W0414 13:44:30.086296 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:30.086303 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:30.086374 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:30.130940 1223410 cri.go:89] found id: ""
	I0414 13:44:30.130979 1223410 logs.go:282] 0 containers: []
	W0414 13:44:30.130993 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:30.131003 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:30.131080 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:30.170137 1223410 cri.go:89] found id: ""
	I0414 13:44:30.170176 1223410 logs.go:282] 0 containers: []
	W0414 13:44:30.170188 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:30.170196 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:30.170349 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:30.205909 1223410 cri.go:89] found id: ""
	I0414 13:44:30.205947 1223410 logs.go:282] 0 containers: []
	W0414 13:44:30.205960 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:30.205969 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:30.206038 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:30.242996 1223410 cri.go:89] found id: ""
	I0414 13:44:30.243034 1223410 logs.go:282] 0 containers: []
	W0414 13:44:30.243046 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:30.243060 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:30.243075 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:30.257847 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:30.257883 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:30.337081 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:30.337108 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:30.337123 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:30.414737 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:30.414788 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:30.463768 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:30.463811 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:33.020485 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:33.034675 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:33.034777 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:33.074447 1223410 cri.go:89] found id: ""
	I0414 13:44:33.074479 1223410 logs.go:282] 0 containers: []
	W0414 13:44:33.074489 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:33.074498 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:33.074578 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:33.118518 1223410 cri.go:89] found id: ""
	I0414 13:44:33.118560 1223410 logs.go:282] 0 containers: []
	W0414 13:44:33.118572 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:33.118579 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:33.118653 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:33.161282 1223410 cri.go:89] found id: ""
	I0414 13:44:33.161312 1223410 logs.go:282] 0 containers: []
	W0414 13:44:33.161324 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:33.161332 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:33.161398 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:33.204333 1223410 cri.go:89] found id: ""
	I0414 13:44:33.204369 1223410 logs.go:282] 0 containers: []
	W0414 13:44:33.204381 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:33.204389 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:33.204467 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:33.255471 1223410 cri.go:89] found id: ""
	I0414 13:44:33.255511 1223410 logs.go:282] 0 containers: []
	W0414 13:44:33.255523 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:33.255531 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:33.255606 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:33.323762 1223410 cri.go:89] found id: ""
	I0414 13:44:33.323796 1223410 logs.go:282] 0 containers: []
	W0414 13:44:33.323808 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:33.323816 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:33.323889 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:33.363900 1223410 cri.go:89] found id: ""
	I0414 13:44:33.363933 1223410 logs.go:282] 0 containers: []
	W0414 13:44:33.363941 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:33.363947 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:33.364010 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:33.399299 1223410 cri.go:89] found id: ""
	I0414 13:44:33.399331 1223410 logs.go:282] 0 containers: []
	W0414 13:44:33.399342 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:33.399355 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:33.399370 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:33.480753 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:33.480802 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:33.523906 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:33.523936 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:33.576751 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:33.576807 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:33.593035 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:33.593070 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:33.668556 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:36.169932 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:36.184381 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:36.184468 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:36.226686 1223410 cri.go:89] found id: ""
	I0414 13:44:36.226729 1223410 logs.go:282] 0 containers: []
	W0414 13:44:36.226752 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:36.226758 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:36.226842 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:36.266758 1223410 cri.go:89] found id: ""
	I0414 13:44:36.266789 1223410 logs.go:282] 0 containers: []
	W0414 13:44:36.266797 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:36.266803 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:36.266856 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:36.306938 1223410 cri.go:89] found id: ""
	I0414 13:44:36.306970 1223410 logs.go:282] 0 containers: []
	W0414 13:44:36.306988 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:36.306995 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:36.307053 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:36.346761 1223410 cri.go:89] found id: ""
	I0414 13:44:36.346814 1223410 logs.go:282] 0 containers: []
	W0414 13:44:36.346826 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:36.346837 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:36.346899 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:36.385848 1223410 cri.go:89] found id: ""
	I0414 13:44:36.385890 1223410 logs.go:282] 0 containers: []
	W0414 13:44:36.385898 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:36.385904 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:36.385972 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:36.422298 1223410 cri.go:89] found id: ""
	I0414 13:44:36.422340 1223410 logs.go:282] 0 containers: []
	W0414 13:44:36.422348 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:36.422356 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:36.422427 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:36.464405 1223410 cri.go:89] found id: ""
	I0414 13:44:36.464446 1223410 logs.go:282] 0 containers: []
	W0414 13:44:36.464454 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:36.464461 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:36.464515 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:36.504707 1223410 cri.go:89] found id: ""
	I0414 13:44:36.504746 1223410 logs.go:282] 0 containers: []
	W0414 13:44:36.504759 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:36.504774 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:36.504790 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:36.547012 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:36.547053 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:36.608182 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:36.608243 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:36.625752 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:36.625790 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:36.704041 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:36.704077 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:36.704095 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:39.293312 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:39.310086 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:39.310172 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:39.350415 1223410 cri.go:89] found id: ""
	I0414 13:44:39.350455 1223410 logs.go:282] 0 containers: []
	W0414 13:44:39.350466 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:39.350475 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:39.350550 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:39.387908 1223410 cri.go:89] found id: ""
	I0414 13:44:39.387940 1223410 logs.go:282] 0 containers: []
	W0414 13:44:39.387949 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:39.387955 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:39.388036 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:39.423949 1223410 cri.go:89] found id: ""
	I0414 13:44:39.423976 1223410 logs.go:282] 0 containers: []
	W0414 13:44:39.423985 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:39.423991 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:39.424041 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:39.458319 1223410 cri.go:89] found id: ""
	I0414 13:44:39.458363 1223410 logs.go:282] 0 containers: []
	W0414 13:44:39.458373 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:39.458380 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:39.458449 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:39.494021 1223410 cri.go:89] found id: ""
	I0414 13:44:39.494070 1223410 logs.go:282] 0 containers: []
	W0414 13:44:39.494082 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:39.494089 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:39.494161 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:39.537111 1223410 cri.go:89] found id: ""
	I0414 13:44:39.537147 1223410 logs.go:282] 0 containers: []
	W0414 13:44:39.537159 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:39.537167 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:39.537238 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:39.576249 1223410 cri.go:89] found id: ""
	I0414 13:44:39.576300 1223410 logs.go:282] 0 containers: []
	W0414 13:44:39.576312 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:39.576320 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:39.576390 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:39.615922 1223410 cri.go:89] found id: ""
	I0414 13:44:39.615958 1223410 logs.go:282] 0 containers: []
	W0414 13:44:39.615967 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:39.615981 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:39.615993 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:39.665462 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:39.665509 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:39.679546 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:39.679580 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:39.754059 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:39.754093 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:39.754111 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:39.837226 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:39.837277 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:42.382417 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:42.397141 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:42.397226 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:42.432492 1223410 cri.go:89] found id: ""
	I0414 13:44:42.432534 1223410 logs.go:282] 0 containers: []
	W0414 13:44:42.432543 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:42.432549 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:42.432616 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:42.467316 1223410 cri.go:89] found id: ""
	I0414 13:44:42.467354 1223410 logs.go:282] 0 containers: []
	W0414 13:44:42.467366 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:42.467374 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:42.467449 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:42.505719 1223410 cri.go:89] found id: ""
	I0414 13:44:42.505760 1223410 logs.go:282] 0 containers: []
	W0414 13:44:42.505780 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:42.505790 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:42.505855 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:42.544630 1223410 cri.go:89] found id: ""
	I0414 13:44:42.544665 1223410 logs.go:282] 0 containers: []
	W0414 13:44:42.544676 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:42.544684 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:42.544759 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:42.585601 1223410 cri.go:89] found id: ""
	I0414 13:44:42.585636 1223410 logs.go:282] 0 containers: []
	W0414 13:44:42.585648 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:42.585656 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:42.585731 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:42.624806 1223410 cri.go:89] found id: ""
	I0414 13:44:42.624848 1223410 logs.go:282] 0 containers: []
	W0414 13:44:42.624860 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:42.624869 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:42.624938 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:42.659999 1223410 cri.go:89] found id: ""
	I0414 13:44:42.660033 1223410 logs.go:282] 0 containers: []
	W0414 13:44:42.660043 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:42.660050 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:42.660104 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:42.697675 1223410 cri.go:89] found id: ""
	I0414 13:44:42.697703 1223410 logs.go:282] 0 containers: []
	W0414 13:44:42.697711 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:42.697724 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:42.697736 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:42.754868 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:42.754921 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:42.771134 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:42.771170 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:42.851511 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:42.851536 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:42.851548 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:42.936583 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:42.936710 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:45.486684 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:45.503020 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:45.503119 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:45.540956 1223410 cri.go:89] found id: ""
	I0414 13:44:45.540985 1223410 logs.go:282] 0 containers: []
	W0414 13:44:45.540994 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:45.541000 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:45.541060 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:45.581397 1223410 cri.go:89] found id: ""
	I0414 13:44:45.581437 1223410 logs.go:282] 0 containers: []
	W0414 13:44:45.581445 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:45.581451 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:45.581506 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:45.618733 1223410 cri.go:89] found id: ""
	I0414 13:44:45.618771 1223410 logs.go:282] 0 containers: []
	W0414 13:44:45.618783 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:45.618791 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:45.618855 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:45.654482 1223410 cri.go:89] found id: ""
	I0414 13:44:45.654513 1223410 logs.go:282] 0 containers: []
	W0414 13:44:45.654522 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:45.654529 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:45.654593 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:45.691519 1223410 cri.go:89] found id: ""
	I0414 13:44:45.691556 1223410 logs.go:282] 0 containers: []
	W0414 13:44:45.691569 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:45.691577 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:45.691690 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:45.728822 1223410 cri.go:89] found id: ""
	I0414 13:44:45.728860 1223410 logs.go:282] 0 containers: []
	W0414 13:44:45.728873 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:45.728882 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:45.728951 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:45.768124 1223410 cri.go:89] found id: ""
	I0414 13:44:45.768159 1223410 logs.go:282] 0 containers: []
	W0414 13:44:45.768171 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:45.768179 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:45.768256 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:45.805269 1223410 cri.go:89] found id: ""
	I0414 13:44:45.805307 1223410 logs.go:282] 0 containers: []
	W0414 13:44:45.805318 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:45.805332 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:45.805352 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:45.855803 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:45.855849 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:45.870172 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:45.870212 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:45.951816 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:45.951892 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:45.951932 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:46.041036 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:46.041087 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:48.586470 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:48.602607 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:48.602674 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:48.647802 1223410 cri.go:89] found id: ""
	I0414 13:44:48.647857 1223410 logs.go:282] 0 containers: []
	W0414 13:44:48.647872 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:48.647882 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:48.647950 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:48.689993 1223410 cri.go:89] found id: ""
	I0414 13:44:48.690028 1223410 logs.go:282] 0 containers: []
	W0414 13:44:48.690040 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:48.690047 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:48.690102 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:48.725952 1223410 cri.go:89] found id: ""
	I0414 13:44:48.725983 1223410 logs.go:282] 0 containers: []
	W0414 13:44:48.725992 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:48.725998 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:48.726074 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:48.766575 1223410 cri.go:89] found id: ""
	I0414 13:44:48.766609 1223410 logs.go:282] 0 containers: []
	W0414 13:44:48.766620 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:48.766679 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:48.766773 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:48.809432 1223410 cri.go:89] found id: ""
	I0414 13:44:48.809465 1223410 logs.go:282] 0 containers: []
	W0414 13:44:48.809477 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:48.809485 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:48.809552 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:48.852182 1223410 cri.go:89] found id: ""
	I0414 13:44:48.852223 1223410 logs.go:282] 0 containers: []
	W0414 13:44:48.852241 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:48.852251 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:48.852343 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:48.891089 1223410 cri.go:89] found id: ""
	I0414 13:44:48.891121 1223410 logs.go:282] 0 containers: []
	W0414 13:44:48.891129 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:48.891135 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:48.891198 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:48.929651 1223410 cri.go:89] found id: ""
	I0414 13:44:48.929688 1223410 logs.go:282] 0 containers: []
	W0414 13:44:48.929704 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:48.929719 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:48.929733 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:48.975328 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:48.975377 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:49.027531 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:49.027581 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:49.043423 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:49.043468 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:49.117362 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:49.117405 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:49.117425 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:51.696944 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:51.712357 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:51.712433 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:51.752108 1223410 cri.go:89] found id: ""
	I0414 13:44:51.752146 1223410 logs.go:282] 0 containers: []
	W0414 13:44:51.752157 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:51.752166 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:51.752244 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:51.789891 1223410 cri.go:89] found id: ""
	I0414 13:44:51.789929 1223410 logs.go:282] 0 containers: []
	W0414 13:44:51.789941 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:51.789949 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:51.790015 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:51.828136 1223410 cri.go:89] found id: ""
	I0414 13:44:51.828168 1223410 logs.go:282] 0 containers: []
	W0414 13:44:51.828176 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:51.828185 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:51.828240 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:51.873656 1223410 cri.go:89] found id: ""
	I0414 13:44:51.873690 1223410 logs.go:282] 0 containers: []
	W0414 13:44:51.873698 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:51.873704 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:51.873763 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:51.914906 1223410 cri.go:89] found id: ""
	I0414 13:44:51.914948 1223410 logs.go:282] 0 containers: []
	W0414 13:44:51.914970 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:51.914990 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:51.915090 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:51.955106 1223410 cri.go:89] found id: ""
	I0414 13:44:51.955136 1223410 logs.go:282] 0 containers: []
	W0414 13:44:51.955144 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:51.955151 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:51.955232 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:51.993525 1223410 cri.go:89] found id: ""
	I0414 13:44:51.993556 1223410 logs.go:282] 0 containers: []
	W0414 13:44:51.993565 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:51.993572 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:51.993626 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:52.034802 1223410 cri.go:89] found id: ""
	I0414 13:44:52.034848 1223410 logs.go:282] 0 containers: []
	W0414 13:44:52.034864 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:52.034877 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:52.034892 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:52.123206 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:52.123263 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:52.179132 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:52.179166 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:52.237500 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:52.237545 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:52.254725 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:52.254761 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:52.343752 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:54.844647 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:54.858440 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:54.858507 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:54.894633 1223410 cri.go:89] found id: ""
	I0414 13:44:54.894664 1223410 logs.go:282] 0 containers: []
	W0414 13:44:54.894675 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:54.894682 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:54.894762 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:54.937461 1223410 cri.go:89] found id: ""
	I0414 13:44:54.937504 1223410 logs.go:282] 0 containers: []
	W0414 13:44:54.937515 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:54.937522 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:54.937591 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:54.980496 1223410 cri.go:89] found id: ""
	I0414 13:44:54.980534 1223410 logs.go:282] 0 containers: []
	W0414 13:44:54.980543 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:54.980549 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:54.980632 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:55.022431 1223410 cri.go:89] found id: ""
	I0414 13:44:55.022470 1223410 logs.go:282] 0 containers: []
	W0414 13:44:55.022482 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:55.022491 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:55.022561 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:55.063911 1223410 cri.go:89] found id: ""
	I0414 13:44:55.063948 1223410 logs.go:282] 0 containers: []
	W0414 13:44:55.063960 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:55.063969 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:55.064077 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:55.102466 1223410 cri.go:89] found id: ""
	I0414 13:44:55.102500 1223410 logs.go:282] 0 containers: []
	W0414 13:44:55.102509 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:55.102516 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:55.102588 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:55.140864 1223410 cri.go:89] found id: ""
	I0414 13:44:55.141025 1223410 logs.go:282] 0 containers: []
	W0414 13:44:55.141046 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:55.141091 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:55.141182 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:55.181169 1223410 cri.go:89] found id: ""
	I0414 13:44:55.181205 1223410 logs.go:282] 0 containers: []
	W0414 13:44:55.181218 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:55.181231 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:55.181246 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:55.231271 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:55.231318 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:55.247548 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:55.247592 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:55.327747 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:55.327779 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:55.327796 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:55.404637 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:55.404686 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:44:57.948687 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:44:57.963476 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:44:57.963556 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:44:58.002980 1223410 cri.go:89] found id: ""
	I0414 13:44:58.003019 1223410 logs.go:282] 0 containers: []
	W0414 13:44:58.003028 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:44:58.003034 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:44:58.003099 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:44:58.047103 1223410 cri.go:89] found id: ""
	I0414 13:44:58.047169 1223410 logs.go:282] 0 containers: []
	W0414 13:44:58.047195 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:44:58.047204 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:44:58.047277 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:44:58.091228 1223410 cri.go:89] found id: ""
	I0414 13:44:58.091268 1223410 logs.go:282] 0 containers: []
	W0414 13:44:58.091281 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:44:58.091288 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:44:58.091393 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:44:58.129417 1223410 cri.go:89] found id: ""
	I0414 13:44:58.129456 1223410 logs.go:282] 0 containers: []
	W0414 13:44:58.129465 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:44:58.129471 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:44:58.129554 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:44:58.167999 1223410 cri.go:89] found id: ""
	I0414 13:44:58.168063 1223410 logs.go:282] 0 containers: []
	W0414 13:44:58.168072 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:44:58.168079 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:44:58.168135 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:44:58.206910 1223410 cri.go:89] found id: ""
	I0414 13:44:58.206948 1223410 logs.go:282] 0 containers: []
	W0414 13:44:58.206957 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:44:58.206965 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:44:58.207027 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:44:58.247029 1223410 cri.go:89] found id: ""
	I0414 13:44:58.247067 1223410 logs.go:282] 0 containers: []
	W0414 13:44:58.247077 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:44:58.247085 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:44:58.247151 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:44:58.284702 1223410 cri.go:89] found id: ""
	I0414 13:44:58.284742 1223410 logs.go:282] 0 containers: []
	W0414 13:44:58.284754 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:44:58.284768 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:44:58.284785 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:44:58.342425 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:44:58.342486 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:44:58.360059 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:44:58.360102 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:44:58.436438 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:44:58.436477 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:44:58.436496 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:44:58.525720 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:44:58.525782 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:01.073387 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:01.087411 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:01.087489 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:01.123936 1223410 cri.go:89] found id: ""
	I0414 13:45:01.123972 1223410 logs.go:282] 0 containers: []
	W0414 13:45:01.123980 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:01.123986 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:01.124043 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:01.161433 1223410 cri.go:89] found id: ""
	I0414 13:45:01.161464 1223410 logs.go:282] 0 containers: []
	W0414 13:45:01.161473 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:01.161479 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:01.161532 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:01.199138 1223410 cri.go:89] found id: ""
	I0414 13:45:01.199172 1223410 logs.go:282] 0 containers: []
	W0414 13:45:01.199181 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:01.199201 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:01.199256 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:01.235361 1223410 cri.go:89] found id: ""
	I0414 13:45:01.235389 1223410 logs.go:282] 0 containers: []
	W0414 13:45:01.235400 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:01.235409 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:01.235465 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:01.271007 1223410 cri.go:89] found id: ""
	I0414 13:45:01.271047 1223410 logs.go:282] 0 containers: []
	W0414 13:45:01.271061 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:01.271069 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:01.271139 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:01.311610 1223410 cri.go:89] found id: ""
	I0414 13:45:01.311669 1223410 logs.go:282] 0 containers: []
	W0414 13:45:01.311684 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:01.311693 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:01.311761 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:01.348868 1223410 cri.go:89] found id: ""
	I0414 13:45:01.348906 1223410 logs.go:282] 0 containers: []
	W0414 13:45:01.348918 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:01.348927 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:01.348999 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:01.384944 1223410 cri.go:89] found id: ""
	I0414 13:45:01.384974 1223410 logs.go:282] 0 containers: []
	W0414 13:45:01.384984 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:01.385000 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:01.385022 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:01.441021 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:01.441089 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:01.457887 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:01.457924 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:01.537329 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:01.537357 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:01.537371 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:01.672108 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:01.672155 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:04.217396 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:04.232481 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:04.232557 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:04.273015 1223410 cri.go:89] found id: ""
	I0414 13:45:04.273060 1223410 logs.go:282] 0 containers: []
	W0414 13:45:04.273070 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:04.273077 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:04.273134 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:04.315581 1223410 cri.go:89] found id: ""
	I0414 13:45:04.315623 1223410 logs.go:282] 0 containers: []
	W0414 13:45:04.315635 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:04.315644 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:04.315745 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:04.358764 1223410 cri.go:89] found id: ""
	I0414 13:45:04.358797 1223410 logs.go:282] 0 containers: []
	W0414 13:45:04.358805 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:04.358812 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:04.358867 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:04.402461 1223410 cri.go:89] found id: ""
	I0414 13:45:04.402504 1223410 logs.go:282] 0 containers: []
	W0414 13:45:04.402516 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:04.402525 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:04.402596 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:04.443972 1223410 cri.go:89] found id: ""
	I0414 13:45:04.444006 1223410 logs.go:282] 0 containers: []
	W0414 13:45:04.444014 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:04.444020 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:04.444094 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:04.484092 1223410 cri.go:89] found id: ""
	I0414 13:45:04.484130 1223410 logs.go:282] 0 containers: []
	W0414 13:45:04.484144 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:04.484152 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:04.484234 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:04.526595 1223410 cri.go:89] found id: ""
	I0414 13:45:04.526643 1223410 logs.go:282] 0 containers: []
	W0414 13:45:04.526655 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:04.526663 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:04.526730 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:04.566878 1223410 cri.go:89] found id: ""
	I0414 13:45:04.566920 1223410 logs.go:282] 0 containers: []
	W0414 13:45:04.566932 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:04.566946 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:04.566963 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:04.622901 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:04.622953 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:04.638332 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:04.638375 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:04.713120 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:04.713165 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:04.713182 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:04.799514 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:04.799558 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:07.342361 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:07.356951 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:07.357054 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:07.398671 1223410 cri.go:89] found id: ""
	I0414 13:45:07.398704 1223410 logs.go:282] 0 containers: []
	W0414 13:45:07.398715 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:07.398723 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:07.398791 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:07.442855 1223410 cri.go:89] found id: ""
	I0414 13:45:07.442894 1223410 logs.go:282] 0 containers: []
	W0414 13:45:07.442903 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:07.442909 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:07.442965 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:07.485328 1223410 cri.go:89] found id: ""
	I0414 13:45:07.485362 1223410 logs.go:282] 0 containers: []
	W0414 13:45:07.485371 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:07.485377 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:07.485440 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:07.529629 1223410 cri.go:89] found id: ""
	I0414 13:45:07.529664 1223410 logs.go:282] 0 containers: []
	W0414 13:45:07.529674 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:07.529680 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:07.529736 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:07.582129 1223410 cri.go:89] found id: ""
	I0414 13:45:07.582162 1223410 logs.go:282] 0 containers: []
	W0414 13:45:07.582171 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:07.582177 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:07.582236 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:07.619946 1223410 cri.go:89] found id: ""
	I0414 13:45:07.619983 1223410 logs.go:282] 0 containers: []
	W0414 13:45:07.619992 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:07.619999 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:07.620067 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:07.656906 1223410 cri.go:89] found id: ""
	I0414 13:45:07.656940 1223410 logs.go:282] 0 containers: []
	W0414 13:45:07.656951 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:07.656959 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:07.657027 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:07.693820 1223410 cri.go:89] found id: ""
	I0414 13:45:07.693862 1223410 logs.go:282] 0 containers: []
	W0414 13:45:07.693871 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:07.693882 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:07.693951 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:07.751048 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:07.751106 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:07.767988 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:07.768026 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:07.849647 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:07.849677 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:07.849701 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:07.932000 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:07.932054 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:10.476428 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:10.492253 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:10.492343 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:10.532447 1223410 cri.go:89] found id: ""
	I0414 13:45:10.532480 1223410 logs.go:282] 0 containers: []
	W0414 13:45:10.532491 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:10.532502 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:10.532607 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:10.570598 1223410 cri.go:89] found id: ""
	I0414 13:45:10.570634 1223410 logs.go:282] 0 containers: []
	W0414 13:45:10.570647 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:10.570655 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:10.570738 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:10.619833 1223410 cri.go:89] found id: ""
	I0414 13:45:10.619864 1223410 logs.go:282] 0 containers: []
	W0414 13:45:10.619874 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:10.619883 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:10.619948 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:10.657955 1223410 cri.go:89] found id: ""
	I0414 13:45:10.657991 1223410 logs.go:282] 0 containers: []
	W0414 13:45:10.658005 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:10.658013 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:10.658085 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:10.697061 1223410 cri.go:89] found id: ""
	I0414 13:45:10.697096 1223410 logs.go:282] 0 containers: []
	W0414 13:45:10.697105 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:10.697111 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:10.697211 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:10.741237 1223410 cri.go:89] found id: ""
	I0414 13:45:10.741269 1223410 logs.go:282] 0 containers: []
	W0414 13:45:10.741280 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:10.741286 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:10.741394 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:10.777836 1223410 cri.go:89] found id: ""
	I0414 13:45:10.777867 1223410 logs.go:282] 0 containers: []
	W0414 13:45:10.777876 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:10.777883 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:10.778031 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:10.819133 1223410 cri.go:89] found id: ""
	I0414 13:45:10.819169 1223410 logs.go:282] 0 containers: []
	W0414 13:45:10.819183 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:10.819196 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:10.819210 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:10.872225 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:10.872321 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:10.889242 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:10.889305 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:10.964836 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:10.964868 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:10.964886 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:11.046314 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:11.046377 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:13.593492 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:13.608672 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:13.608775 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:13.645512 1223410 cri.go:89] found id: ""
	I0414 13:45:13.645544 1223410 logs.go:282] 0 containers: []
	W0414 13:45:13.645553 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:13.645559 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:13.645614 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:13.683898 1223410 cri.go:89] found id: ""
	I0414 13:45:13.683937 1223410 logs.go:282] 0 containers: []
	W0414 13:45:13.683956 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:13.683963 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:13.684043 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:13.724684 1223410 cri.go:89] found id: ""
	I0414 13:45:13.724741 1223410 logs.go:282] 0 containers: []
	W0414 13:45:13.724753 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:13.724762 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:13.724832 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:13.761371 1223410 cri.go:89] found id: ""
	I0414 13:45:13.761415 1223410 logs.go:282] 0 containers: []
	W0414 13:45:13.761429 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:13.761439 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:13.761501 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:13.799063 1223410 cri.go:89] found id: ""
	I0414 13:45:13.799093 1223410 logs.go:282] 0 containers: []
	W0414 13:45:13.799100 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:13.799108 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:13.799172 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:13.838858 1223410 cri.go:89] found id: ""
	I0414 13:45:13.838898 1223410 logs.go:282] 0 containers: []
	W0414 13:45:13.838909 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:13.838915 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:13.839016 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:13.878116 1223410 cri.go:89] found id: ""
	I0414 13:45:13.878152 1223410 logs.go:282] 0 containers: []
	W0414 13:45:13.878163 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:13.878172 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:13.878243 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:13.922055 1223410 cri.go:89] found id: ""
	I0414 13:45:13.922090 1223410 logs.go:282] 0 containers: []
	W0414 13:45:13.922099 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:13.922111 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:13.922123 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:13.964988 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:13.965032 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:14.020593 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:14.020649 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:14.036545 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:14.036587 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:14.114377 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:14.114408 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:14.114422 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:16.694363 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:16.709575 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:16.709650 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:16.751323 1223410 cri.go:89] found id: ""
	I0414 13:45:16.751353 1223410 logs.go:282] 0 containers: []
	W0414 13:45:16.751364 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:16.751374 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:16.751442 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:16.788941 1223410 cri.go:89] found id: ""
	I0414 13:45:16.788982 1223410 logs.go:282] 0 containers: []
	W0414 13:45:16.788995 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:16.789007 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:16.789084 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:16.830038 1223410 cri.go:89] found id: ""
	I0414 13:45:16.830076 1223410 logs.go:282] 0 containers: []
	W0414 13:45:16.830087 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:16.830097 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:16.830167 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:16.869974 1223410 cri.go:89] found id: ""
	I0414 13:45:16.870014 1223410 logs.go:282] 0 containers: []
	W0414 13:45:16.870026 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:16.870034 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:16.870147 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:16.909835 1223410 cri.go:89] found id: ""
	I0414 13:45:16.909866 1223410 logs.go:282] 0 containers: []
	W0414 13:45:16.909874 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:16.909880 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:16.909935 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:16.949530 1223410 cri.go:89] found id: ""
	I0414 13:45:16.949566 1223410 logs.go:282] 0 containers: []
	W0414 13:45:16.949581 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:16.949591 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:16.949666 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:16.986287 1223410 cri.go:89] found id: ""
	I0414 13:45:16.986322 1223410 logs.go:282] 0 containers: []
	W0414 13:45:16.986332 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:16.986338 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:16.986405 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:17.025221 1223410 cri.go:89] found id: ""
	I0414 13:45:17.025256 1223410 logs.go:282] 0 containers: []
	W0414 13:45:17.025268 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:17.025282 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:17.025321 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:17.040057 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:17.040096 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:17.116488 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:17.116520 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:17.116535 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:17.196919 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:17.196964 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:17.239013 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:17.239071 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:19.805537 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:19.821054 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:19.821143 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:19.862467 1223410 cri.go:89] found id: ""
	I0414 13:45:19.862517 1223410 logs.go:282] 0 containers: []
	W0414 13:45:19.862530 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:19.862543 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:19.862614 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:19.902208 1223410 cri.go:89] found id: ""
	I0414 13:45:19.902244 1223410 logs.go:282] 0 containers: []
	W0414 13:45:19.902256 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:19.902263 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:19.902323 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:19.943357 1223410 cri.go:89] found id: ""
	I0414 13:45:19.943392 1223410 logs.go:282] 0 containers: []
	W0414 13:45:19.943404 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:19.943412 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:19.943477 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:19.985542 1223410 cri.go:89] found id: ""
	I0414 13:45:19.985582 1223410 logs.go:282] 0 containers: []
	W0414 13:45:19.985594 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:19.985606 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:19.985672 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:20.040654 1223410 cri.go:89] found id: ""
	I0414 13:45:20.040689 1223410 logs.go:282] 0 containers: []
	W0414 13:45:20.040699 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:20.040707 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:20.040774 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:20.080907 1223410 cri.go:89] found id: ""
	I0414 13:45:20.080949 1223410 logs.go:282] 0 containers: []
	W0414 13:45:20.080962 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:20.080970 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:20.081076 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:20.118663 1223410 cri.go:89] found id: ""
	I0414 13:45:20.118701 1223410 logs.go:282] 0 containers: []
	W0414 13:45:20.118713 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:20.118720 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:20.118786 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:20.166045 1223410 cri.go:89] found id: ""
	I0414 13:45:20.166088 1223410 logs.go:282] 0 containers: []
	W0414 13:45:20.166097 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:20.166108 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:20.166121 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:20.223268 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:20.223322 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:20.237498 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:20.237528 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:20.308049 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:20.308080 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:20.308101 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:20.388351 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:20.388394 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:22.928993 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:22.944710 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:22.944802 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:22.985663 1223410 cri.go:89] found id: ""
	I0414 13:45:22.985697 1223410 logs.go:282] 0 containers: []
	W0414 13:45:22.985706 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:22.985712 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:22.985804 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:23.022116 1223410 cri.go:89] found id: ""
	I0414 13:45:23.022150 1223410 logs.go:282] 0 containers: []
	W0414 13:45:23.022159 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:23.022166 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:23.022232 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:23.063606 1223410 cri.go:89] found id: ""
	I0414 13:45:23.063711 1223410 logs.go:282] 0 containers: []
	W0414 13:45:23.063728 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:23.063739 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:23.063829 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:23.102467 1223410 cri.go:89] found id: ""
	I0414 13:45:23.102508 1223410 logs.go:282] 0 containers: []
	W0414 13:45:23.102518 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:23.102525 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:23.102590 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:23.144254 1223410 cri.go:89] found id: ""
	I0414 13:45:23.144291 1223410 logs.go:282] 0 containers: []
	W0414 13:45:23.144300 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:23.144306 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:23.144365 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:23.182130 1223410 cri.go:89] found id: ""
	I0414 13:45:23.182166 1223410 logs.go:282] 0 containers: []
	W0414 13:45:23.182180 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:23.182188 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:23.182241 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:23.220411 1223410 cri.go:89] found id: ""
	I0414 13:45:23.220447 1223410 logs.go:282] 0 containers: []
	W0414 13:45:23.220456 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:23.220463 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:23.220525 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:23.262211 1223410 cri.go:89] found id: ""
	I0414 13:45:23.262241 1223410 logs.go:282] 0 containers: []
	W0414 13:45:23.262249 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:23.262261 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:23.262274 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:23.304387 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:23.304428 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:23.358552 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:23.358621 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:23.374436 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:23.374478 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:23.448226 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:23.448255 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:23.448273 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:26.037563 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:26.053640 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:26.053736 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:26.093623 1223410 cri.go:89] found id: ""
	I0414 13:45:26.093668 1223410 logs.go:282] 0 containers: []
	W0414 13:45:26.093681 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:26.093689 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:26.093765 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:26.129508 1223410 cri.go:89] found id: ""
	I0414 13:45:26.129553 1223410 logs.go:282] 0 containers: []
	W0414 13:45:26.129565 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:26.129573 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:26.129659 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:26.168131 1223410 cri.go:89] found id: ""
	I0414 13:45:26.168172 1223410 logs.go:282] 0 containers: []
	W0414 13:45:26.168184 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:26.168192 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:26.168257 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:26.203339 1223410 cri.go:89] found id: ""
	I0414 13:45:26.203381 1223410 logs.go:282] 0 containers: []
	W0414 13:45:26.203392 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:26.203400 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:26.203467 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:26.240638 1223410 cri.go:89] found id: ""
	I0414 13:45:26.240676 1223410 logs.go:282] 0 containers: []
	W0414 13:45:26.240684 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:26.240690 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:26.240746 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:26.280256 1223410 cri.go:89] found id: ""
	I0414 13:45:26.280295 1223410 logs.go:282] 0 containers: []
	W0414 13:45:26.280307 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:26.280316 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:26.280383 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:26.316566 1223410 cri.go:89] found id: ""
	I0414 13:45:26.316600 1223410 logs.go:282] 0 containers: []
	W0414 13:45:26.316612 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:26.316620 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:26.316686 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:26.352611 1223410 cri.go:89] found id: ""
	I0414 13:45:26.352649 1223410 logs.go:282] 0 containers: []
	W0414 13:45:26.352663 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:26.352677 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:26.352698 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:26.428669 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:26.428701 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:26.428720 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:26.510761 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:26.510815 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:26.553224 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:26.553269 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:26.607893 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:26.607939 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:29.124394 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:29.140736 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:29.140866 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:29.179923 1223410 cri.go:89] found id: ""
	I0414 13:45:29.179960 1223410 logs.go:282] 0 containers: []
	W0414 13:45:29.179999 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:29.180009 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:29.180088 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:29.216319 1223410 cri.go:89] found id: ""
	I0414 13:45:29.216353 1223410 logs.go:282] 0 containers: []
	W0414 13:45:29.216363 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:29.216368 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:29.216425 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:29.257563 1223410 cri.go:89] found id: ""
	I0414 13:45:29.257604 1223410 logs.go:282] 0 containers: []
	W0414 13:45:29.257613 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:29.257620 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:29.257679 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:29.299394 1223410 cri.go:89] found id: ""
	I0414 13:45:29.299431 1223410 logs.go:282] 0 containers: []
	W0414 13:45:29.299452 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:29.299460 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:29.299540 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:29.339198 1223410 cri.go:89] found id: ""
	I0414 13:45:29.339232 1223410 logs.go:282] 0 containers: []
	W0414 13:45:29.339241 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:29.339247 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:29.339311 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:29.387775 1223410 cri.go:89] found id: ""
	I0414 13:45:29.387814 1223410 logs.go:282] 0 containers: []
	W0414 13:45:29.387826 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:29.387835 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:29.387906 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:29.432554 1223410 cri.go:89] found id: ""
	I0414 13:45:29.432597 1223410 logs.go:282] 0 containers: []
	W0414 13:45:29.432609 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:29.432627 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:29.432697 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:29.474322 1223410 cri.go:89] found id: ""
	I0414 13:45:29.474360 1223410 logs.go:282] 0 containers: []
	W0414 13:45:29.474371 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:29.474385 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:29.474403 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:29.490455 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:29.490498 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:29.571743 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:29.571777 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:29.571796 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:29.651374 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:29.651425 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:29.690789 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:29.690830 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:32.252623 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:32.268880 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:32.268973 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:32.318494 1223410 cri.go:89] found id: ""
	I0414 13:45:32.318531 1223410 logs.go:282] 0 containers: []
	W0414 13:45:32.318544 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:32.318553 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:32.318621 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:32.358760 1223410 cri.go:89] found id: ""
	I0414 13:45:32.358798 1223410 logs.go:282] 0 containers: []
	W0414 13:45:32.358809 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:32.358818 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:32.358890 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:32.409170 1223410 cri.go:89] found id: ""
	I0414 13:45:32.409206 1223410 logs.go:282] 0 containers: []
	W0414 13:45:32.409218 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:32.409225 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:32.409295 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:32.457680 1223410 cri.go:89] found id: ""
	I0414 13:45:32.457720 1223410 logs.go:282] 0 containers: []
	W0414 13:45:32.457734 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:32.457743 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:32.457811 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:32.502797 1223410 cri.go:89] found id: ""
	I0414 13:45:32.502830 1223410 logs.go:282] 0 containers: []
	W0414 13:45:32.502841 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:32.502847 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:32.502919 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:32.539252 1223410 cri.go:89] found id: ""
	I0414 13:45:32.539297 1223410 logs.go:282] 0 containers: []
	W0414 13:45:32.539310 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:32.539317 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:32.539377 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:32.579874 1223410 cri.go:89] found id: ""
	I0414 13:45:32.579907 1223410 logs.go:282] 0 containers: []
	W0414 13:45:32.579918 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:32.579925 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:32.579991 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:32.622831 1223410 cri.go:89] found id: ""
	I0414 13:45:32.622873 1223410 logs.go:282] 0 containers: []
	W0414 13:45:32.622884 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:32.622898 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:32.622913 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:32.675191 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:32.675250 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:32.695096 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:32.695170 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:32.784375 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:32.784403 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:32.784418 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:32.886252 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:32.886304 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:35.438271 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:35.455547 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:35.455638 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:35.513344 1223410 cri.go:89] found id: ""
	I0414 13:45:35.513372 1223410 logs.go:282] 0 containers: []
	W0414 13:45:35.513380 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:35.513386 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:35.513459 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:35.565062 1223410 cri.go:89] found id: ""
	I0414 13:45:35.565097 1223410 logs.go:282] 0 containers: []
	W0414 13:45:35.565108 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:35.565117 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:35.565181 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:35.606681 1223410 cri.go:89] found id: ""
	I0414 13:45:35.606713 1223410 logs.go:282] 0 containers: []
	W0414 13:45:35.606721 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:35.606727 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:35.606800 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:35.652197 1223410 cri.go:89] found id: ""
	I0414 13:45:35.652235 1223410 logs.go:282] 0 containers: []
	W0414 13:45:35.652243 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:35.652249 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:35.652348 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:35.697050 1223410 cri.go:89] found id: ""
	I0414 13:45:35.697092 1223410 logs.go:282] 0 containers: []
	W0414 13:45:35.697102 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:35.697107 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:35.697163 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:35.741414 1223410 cri.go:89] found id: ""
	I0414 13:45:35.741448 1223410 logs.go:282] 0 containers: []
	W0414 13:45:35.741459 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:35.741469 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:35.741544 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:35.786960 1223410 cri.go:89] found id: ""
	I0414 13:45:35.787023 1223410 logs.go:282] 0 containers: []
	W0414 13:45:35.787036 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:35.787043 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:35.787113 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:35.838109 1223410 cri.go:89] found id: ""
	I0414 13:45:35.838150 1223410 logs.go:282] 0 containers: []
	W0414 13:45:35.838163 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:35.838177 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:35.838202 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:35.912472 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:35.912541 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:35.932479 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:35.932525 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:36.024049 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:36.024077 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:36.024096 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:36.122541 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:36.122614 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:38.669900 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:38.690112 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:38.690203 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:38.742995 1223410 cri.go:89] found id: ""
	I0414 13:45:38.743031 1223410 logs.go:282] 0 containers: []
	W0414 13:45:38.743050 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:38.743058 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:38.743134 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:38.790847 1223410 cri.go:89] found id: ""
	I0414 13:45:38.790885 1223410 logs.go:282] 0 containers: []
	W0414 13:45:38.790897 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:38.790904 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:38.790973 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:38.834094 1223410 cri.go:89] found id: ""
	I0414 13:45:38.834125 1223410 logs.go:282] 0 containers: []
	W0414 13:45:38.834133 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:38.834139 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:38.834206 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:38.879270 1223410 cri.go:89] found id: ""
	I0414 13:45:38.879308 1223410 logs.go:282] 0 containers: []
	W0414 13:45:38.879320 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:38.879334 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:38.879404 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:38.923056 1223410 cri.go:89] found id: ""
	I0414 13:45:38.923096 1223410 logs.go:282] 0 containers: []
	W0414 13:45:38.923107 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:38.923115 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:38.923176 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:38.965892 1223410 cri.go:89] found id: ""
	I0414 13:45:38.965920 1223410 logs.go:282] 0 containers: []
	W0414 13:45:38.965928 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:38.965934 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:38.966010 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:39.011608 1223410 cri.go:89] found id: ""
	I0414 13:45:39.011646 1223410 logs.go:282] 0 containers: []
	W0414 13:45:39.011694 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:39.011702 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:39.011769 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:39.048807 1223410 cri.go:89] found id: ""
	I0414 13:45:39.048846 1223410 logs.go:282] 0 containers: []
	W0414 13:45:39.048858 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:39.048869 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:39.048887 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:39.090194 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:39.090235 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:39.151618 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:39.151696 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:39.166601 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:39.166668 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:39.277154 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:39.277182 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:39.277200 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:41.863873 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:41.881922 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:41.882158 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:41.927479 1223410 cri.go:89] found id: ""
	I0414 13:45:41.927524 1223410 logs.go:282] 0 containers: []
	W0414 13:45:41.927536 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:41.927545 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:41.927614 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:41.974056 1223410 cri.go:89] found id: ""
	I0414 13:45:41.974092 1223410 logs.go:282] 0 containers: []
	W0414 13:45:41.974104 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:41.974112 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:41.974183 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:42.015583 1223410 cri.go:89] found id: ""
	I0414 13:45:42.015644 1223410 logs.go:282] 0 containers: []
	W0414 13:45:42.015668 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:42.015678 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:42.015743 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:42.055684 1223410 cri.go:89] found id: ""
	I0414 13:45:42.055739 1223410 logs.go:282] 0 containers: []
	W0414 13:45:42.055750 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:42.055756 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:42.055822 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:42.098798 1223410 cri.go:89] found id: ""
	I0414 13:45:42.098827 1223410 logs.go:282] 0 containers: []
	W0414 13:45:42.098834 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:42.098841 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:42.098895 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:42.147286 1223410 cri.go:89] found id: ""
	I0414 13:45:42.147324 1223410 logs.go:282] 0 containers: []
	W0414 13:45:42.147332 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:42.147338 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:42.147396 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:42.192379 1223410 cri.go:89] found id: ""
	I0414 13:45:42.192471 1223410 logs.go:282] 0 containers: []
	W0414 13:45:42.192489 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:42.192498 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:42.192578 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:42.244994 1223410 cri.go:89] found id: ""
	I0414 13:45:42.245036 1223410 logs.go:282] 0 containers: []
	W0414 13:45:42.245048 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:42.245061 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:42.245079 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:42.307981 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:42.308032 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:42.327939 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:42.327981 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:42.429836 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:42.429870 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:42.429888 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:42.547685 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:42.548241 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:45.145683 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:45.165002 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:45.165103 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:45.219737 1223410 cri.go:89] found id: ""
	I0414 13:45:45.219775 1223410 logs.go:282] 0 containers: []
	W0414 13:45:45.219796 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:45.219804 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:45.219879 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:45.272388 1223410 cri.go:89] found id: ""
	I0414 13:45:45.272416 1223410 logs.go:282] 0 containers: []
	W0414 13:45:45.272424 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:45.272430 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:45.272483 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:45.316270 1223410 cri.go:89] found id: ""
	I0414 13:45:45.316318 1223410 logs.go:282] 0 containers: []
	W0414 13:45:45.316331 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:45.316340 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:45.316423 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:45.365094 1223410 cri.go:89] found id: ""
	I0414 13:45:45.365125 1223410 logs.go:282] 0 containers: []
	W0414 13:45:45.365136 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:45.365144 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:45.365209 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:45.408255 1223410 cri.go:89] found id: ""
	I0414 13:45:45.408298 1223410 logs.go:282] 0 containers: []
	W0414 13:45:45.408321 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:45.408329 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:45.408402 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:45.453983 1223410 cri.go:89] found id: ""
	I0414 13:45:45.454017 1223410 logs.go:282] 0 containers: []
	W0414 13:45:45.454028 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:45.454035 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:45.454103 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:45.496330 1223410 cri.go:89] found id: ""
	I0414 13:45:45.496367 1223410 logs.go:282] 0 containers: []
	W0414 13:45:45.496384 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:45.496392 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:45.496462 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:45.538862 1223410 cri.go:89] found id: ""
	I0414 13:45:45.538905 1223410 logs.go:282] 0 containers: []
	W0414 13:45:45.538919 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:45.538943 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:45.538962 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:45.602885 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:45.602931 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:45.622639 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:45.622752 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:45.722250 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:45.722282 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:45.722311 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:45.819123 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:45.819173 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:48.367819 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:48.383105 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:48.383211 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:48.428615 1223410 cri.go:89] found id: ""
	I0414 13:45:48.428653 1223410 logs.go:282] 0 containers: []
	W0414 13:45:48.428665 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:48.428675 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:48.428747 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:48.468809 1223410 cri.go:89] found id: ""
	I0414 13:45:48.468846 1223410 logs.go:282] 0 containers: []
	W0414 13:45:48.468859 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:48.468868 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:48.468935 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:48.511019 1223410 cri.go:89] found id: ""
	I0414 13:45:48.511057 1223410 logs.go:282] 0 containers: []
	W0414 13:45:48.511069 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:48.511078 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:48.511148 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:48.556980 1223410 cri.go:89] found id: ""
	I0414 13:45:48.557021 1223410 logs.go:282] 0 containers: []
	W0414 13:45:48.557036 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:48.557049 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:48.557116 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:48.623350 1223410 cri.go:89] found id: ""
	I0414 13:45:48.623386 1223410 logs.go:282] 0 containers: []
	W0414 13:45:48.623398 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:48.623410 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:48.623483 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:48.663513 1223410 cri.go:89] found id: ""
	I0414 13:45:48.663546 1223410 logs.go:282] 0 containers: []
	W0414 13:45:48.663557 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:48.663565 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:48.663633 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:48.712569 1223410 cri.go:89] found id: ""
	I0414 13:45:48.712601 1223410 logs.go:282] 0 containers: []
	W0414 13:45:48.712609 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:48.712615 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:48.712677 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:48.753497 1223410 cri.go:89] found id: ""
	I0414 13:45:48.753545 1223410 logs.go:282] 0 containers: []
	W0414 13:45:48.753558 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:48.753572 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:48.753587 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:48.809325 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:48.809371 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:48.827067 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:48.827112 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:48.914848 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:48.914878 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:48.914896 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:49.007795 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:49.007851 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:51.558005 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:51.579305 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:51.579430 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:51.641148 1223410 cri.go:89] found id: ""
	I0414 13:45:51.641184 1223410 logs.go:282] 0 containers: []
	W0414 13:45:51.641196 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:51.641213 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:51.641277 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:51.722699 1223410 cri.go:89] found id: ""
	I0414 13:45:51.722730 1223410 logs.go:282] 0 containers: []
	W0414 13:45:51.722741 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:51.722752 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:51.722812 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:51.766488 1223410 cri.go:89] found id: ""
	I0414 13:45:51.766519 1223410 logs.go:282] 0 containers: []
	W0414 13:45:51.766529 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:51.766536 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:51.766604 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:51.810933 1223410 cri.go:89] found id: ""
	I0414 13:45:51.810991 1223410 logs.go:282] 0 containers: []
	W0414 13:45:51.811003 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:51.811011 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:51.811098 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:51.858671 1223410 cri.go:89] found id: ""
	I0414 13:45:51.858704 1223410 logs.go:282] 0 containers: []
	W0414 13:45:51.858715 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:51.858723 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:51.858786 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:51.907824 1223410 cri.go:89] found id: ""
	I0414 13:45:51.907858 1223410 logs.go:282] 0 containers: []
	W0414 13:45:51.907869 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:51.907877 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:51.907947 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:51.961819 1223410 cri.go:89] found id: ""
	I0414 13:45:51.961862 1223410 logs.go:282] 0 containers: []
	W0414 13:45:51.961874 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:51.961883 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:51.961966 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:52.006377 1223410 cri.go:89] found id: ""
	I0414 13:45:52.006406 1223410 logs.go:282] 0 containers: []
	W0414 13:45:52.006416 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:52.006429 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:52.006445 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:52.061991 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:52.062056 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:52.079896 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:52.079937 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:52.174754 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:52.174788 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:52.174808 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:52.264533 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:52.264587 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:54.816650 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:54.835170 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:54.835242 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:54.887987 1223410 cri.go:89] found id: ""
	I0414 13:45:54.888022 1223410 logs.go:282] 0 containers: []
	W0414 13:45:54.888047 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:54.888057 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:54.888127 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:54.942160 1223410 cri.go:89] found id: ""
	I0414 13:45:54.942199 1223410 logs.go:282] 0 containers: []
	W0414 13:45:54.942212 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:54.942220 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:54.942347 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:54.985915 1223410 cri.go:89] found id: ""
	I0414 13:45:54.986017 1223410 logs.go:282] 0 containers: []
	W0414 13:45:54.986066 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:54.986077 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:54.986148 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:55.037031 1223410 cri.go:89] found id: ""
	I0414 13:45:55.037069 1223410 logs.go:282] 0 containers: []
	W0414 13:45:55.037081 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:55.037089 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:55.037169 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:55.085060 1223410 cri.go:89] found id: ""
	I0414 13:45:55.085093 1223410 logs.go:282] 0 containers: []
	W0414 13:45:55.085105 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:55.085113 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:55.085186 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:55.130640 1223410 cri.go:89] found id: ""
	I0414 13:45:55.130676 1223410 logs.go:282] 0 containers: []
	W0414 13:45:55.130689 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:55.130698 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:55.130767 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:55.177883 1223410 cri.go:89] found id: ""
	I0414 13:45:55.178044 1223410 logs.go:282] 0 containers: []
	W0414 13:45:55.178066 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:55.178078 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:55.178187 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:55.227813 1223410 cri.go:89] found id: ""
	I0414 13:45:55.227857 1223410 logs.go:282] 0 containers: []
	W0414 13:45:55.227871 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:55.227887 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:55.227909 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:55.315631 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:55.315691 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:55.338244 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:55.338283 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:55.438542 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:55.438649 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:55.438719 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:55.556572 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:55.556621 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:45:58.111499 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:45:58.128845 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:45:58.128916 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:45:58.177821 1223410 cri.go:89] found id: ""
	I0414 13:45:58.177863 1223410 logs.go:282] 0 containers: []
	W0414 13:45:58.177876 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:45:58.177885 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:45:58.177980 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:45:58.243317 1223410 cri.go:89] found id: ""
	I0414 13:45:58.243352 1223410 logs.go:282] 0 containers: []
	W0414 13:45:58.243365 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:45:58.243373 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:45:58.243438 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:45:58.292715 1223410 cri.go:89] found id: ""
	I0414 13:45:58.292757 1223410 logs.go:282] 0 containers: []
	W0414 13:45:58.292770 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:45:58.292778 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:45:58.292848 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:45:58.355933 1223410 cri.go:89] found id: ""
	I0414 13:45:58.355970 1223410 logs.go:282] 0 containers: []
	W0414 13:45:58.355982 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:45:58.355991 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:45:58.356057 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:45:58.408579 1223410 cri.go:89] found id: ""
	I0414 13:45:58.408684 1223410 logs.go:282] 0 containers: []
	W0414 13:45:58.408706 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:45:58.408720 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:45:58.408825 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:45:58.460806 1223410 cri.go:89] found id: ""
	I0414 13:45:58.460840 1223410 logs.go:282] 0 containers: []
	W0414 13:45:58.460852 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:45:58.460862 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:45:58.460920 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:45:58.509711 1223410 cri.go:89] found id: ""
	I0414 13:45:58.509744 1223410 logs.go:282] 0 containers: []
	W0414 13:45:58.509756 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:45:58.509764 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:45:58.509823 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:45:58.554292 1223410 cri.go:89] found id: ""
	I0414 13:45:58.554328 1223410 logs.go:282] 0 containers: []
	W0414 13:45:58.554340 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:45:58.554353 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:45:58.554370 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:45:58.614742 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:45:58.614780 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:45:58.631786 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:45:58.631830 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:45:58.744416 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:45:58.744500 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:45:58.744530 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:45:58.838385 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:45:58.838440 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:01.400552 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:01.414064 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:01.414137 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:01.453677 1223410 cri.go:89] found id: ""
	I0414 13:46:01.453713 1223410 logs.go:282] 0 containers: []
	W0414 13:46:01.453724 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:01.453733 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:01.453807 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:01.491927 1223410 cri.go:89] found id: ""
	I0414 13:46:01.491954 1223410 logs.go:282] 0 containers: []
	W0414 13:46:01.491965 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:01.491973 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:01.492023 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:01.532694 1223410 cri.go:89] found id: ""
	I0414 13:46:01.532727 1223410 logs.go:282] 0 containers: []
	W0414 13:46:01.532738 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:01.532746 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:01.532823 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:01.575480 1223410 cri.go:89] found id: ""
	I0414 13:46:01.575516 1223410 logs.go:282] 0 containers: []
	W0414 13:46:01.575528 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:01.575536 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:01.575597 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:01.615360 1223410 cri.go:89] found id: ""
	I0414 13:46:01.615392 1223410 logs.go:282] 0 containers: []
	W0414 13:46:01.615402 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:01.615411 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:01.615478 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:01.653599 1223410 cri.go:89] found id: ""
	I0414 13:46:01.653630 1223410 logs.go:282] 0 containers: []
	W0414 13:46:01.653642 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:01.653648 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:01.653713 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:01.699248 1223410 cri.go:89] found id: ""
	I0414 13:46:01.699289 1223410 logs.go:282] 0 containers: []
	W0414 13:46:01.699303 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:01.699313 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:01.699384 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:01.738156 1223410 cri.go:89] found id: ""
	I0414 13:46:01.738195 1223410 logs.go:282] 0 containers: []
	W0414 13:46:01.738208 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:01.738223 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:01.738240 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:01.808537 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:01.808586 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:01.832110 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:01.832166 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:01.918142 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:01.918338 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:01.918371 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:02.024852 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:02.024914 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:04.579801 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:04.600710 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:04.600795 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:04.641204 1223410 cri.go:89] found id: ""
	I0414 13:46:04.641246 1223410 logs.go:282] 0 containers: []
	W0414 13:46:04.641261 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:04.641272 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:04.641340 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:04.691871 1223410 cri.go:89] found id: ""
	I0414 13:46:04.691911 1223410 logs.go:282] 0 containers: []
	W0414 13:46:04.691937 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:04.691948 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:04.692043 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:04.735962 1223410 cri.go:89] found id: ""
	I0414 13:46:04.736051 1223410 logs.go:282] 0 containers: []
	W0414 13:46:04.736073 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:04.736082 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:04.736158 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:04.773465 1223410 cri.go:89] found id: ""
	I0414 13:46:04.773503 1223410 logs.go:282] 0 containers: []
	W0414 13:46:04.773515 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:04.773529 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:04.773632 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:04.827470 1223410 cri.go:89] found id: ""
	I0414 13:46:04.827527 1223410 logs.go:282] 0 containers: []
	W0414 13:46:04.827539 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:04.827547 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:04.827625 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:04.881974 1223410 cri.go:89] found id: ""
	I0414 13:46:04.882016 1223410 logs.go:282] 0 containers: []
	W0414 13:46:04.882028 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:04.882037 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:04.882103 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:04.934996 1223410 cri.go:89] found id: ""
	I0414 13:46:04.935041 1223410 logs.go:282] 0 containers: []
	W0414 13:46:04.935053 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:04.935060 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:04.935127 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:04.983163 1223410 cri.go:89] found id: ""
	I0414 13:46:04.983195 1223410 logs.go:282] 0 containers: []
	W0414 13:46:04.983205 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:04.983215 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:04.983226 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:05.076318 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:05.076345 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:05.076363 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:05.173288 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:05.173334 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:05.233001 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:05.233142 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:05.302608 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:05.302734 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:07.826680 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:07.846260 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:07.846359 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:07.898286 1223410 cri.go:89] found id: ""
	I0414 13:46:07.898325 1223410 logs.go:282] 0 containers: []
	W0414 13:46:07.898337 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:07.898345 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:07.898402 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:07.961361 1223410 cri.go:89] found id: ""
	I0414 13:46:07.961392 1223410 logs.go:282] 0 containers: []
	W0414 13:46:07.961403 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:07.961412 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:07.961473 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:08.014120 1223410 cri.go:89] found id: ""
	I0414 13:46:08.014157 1223410 logs.go:282] 0 containers: []
	W0414 13:46:08.014169 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:08.014179 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:08.014236 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:08.067834 1223410 cri.go:89] found id: ""
	I0414 13:46:08.067869 1223410 logs.go:282] 0 containers: []
	W0414 13:46:08.067882 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:08.067890 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:08.067958 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:08.112732 1223410 cri.go:89] found id: ""
	I0414 13:46:08.112771 1223410 logs.go:282] 0 containers: []
	W0414 13:46:08.112782 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:08.112791 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:08.112861 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:08.157360 1223410 cri.go:89] found id: ""
	I0414 13:46:08.157400 1223410 logs.go:282] 0 containers: []
	W0414 13:46:08.157413 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:08.157422 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:08.157501 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:08.198668 1223410 cri.go:89] found id: ""
	I0414 13:46:08.198708 1223410 logs.go:282] 0 containers: []
	W0414 13:46:08.198720 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:08.198728 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:08.198802 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:08.241399 1223410 cri.go:89] found id: ""
	I0414 13:46:08.241430 1223410 logs.go:282] 0 containers: []
	W0414 13:46:08.241438 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:08.241449 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:08.241460 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:08.304152 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:08.304309 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:08.325751 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:08.325797 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:08.457326 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:08.457354 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:08.457368 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:08.557360 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:08.557412 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:11.111347 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:11.126775 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:11.126838 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:11.162327 1223410 cri.go:89] found id: ""
	I0414 13:46:11.162360 1223410 logs.go:282] 0 containers: []
	W0414 13:46:11.162371 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:11.162379 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:11.162449 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:11.201247 1223410 cri.go:89] found id: ""
	I0414 13:46:11.201286 1223410 logs.go:282] 0 containers: []
	W0414 13:46:11.201298 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:11.201304 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:11.201376 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:11.237599 1223410 cri.go:89] found id: ""
	I0414 13:46:11.237633 1223410 logs.go:282] 0 containers: []
	W0414 13:46:11.237644 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:11.237657 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:11.237719 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:11.278624 1223410 cri.go:89] found id: ""
	I0414 13:46:11.278670 1223410 logs.go:282] 0 containers: []
	W0414 13:46:11.278689 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:11.278698 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:11.278769 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:11.318113 1223410 cri.go:89] found id: ""
	I0414 13:46:11.318156 1223410 logs.go:282] 0 containers: []
	W0414 13:46:11.318168 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:11.318177 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:11.318246 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:11.362858 1223410 cri.go:89] found id: ""
	I0414 13:46:11.362896 1223410 logs.go:282] 0 containers: []
	W0414 13:46:11.362907 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:11.362916 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:11.362984 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:11.404772 1223410 cri.go:89] found id: ""
	I0414 13:46:11.404809 1223410 logs.go:282] 0 containers: []
	W0414 13:46:11.404818 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:11.404824 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:11.404888 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:11.445307 1223410 cri.go:89] found id: ""
	I0414 13:46:11.445338 1223410 logs.go:282] 0 containers: []
	W0414 13:46:11.445349 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:11.445361 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:11.445377 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:11.466176 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:11.466230 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:11.554728 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:11.554751 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:11.554769 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:11.650975 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:11.651023 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:11.694838 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:11.694870 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:14.250188 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:14.266585 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:14.266665 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:14.311145 1223410 cri.go:89] found id: ""
	I0414 13:46:14.311177 1223410 logs.go:282] 0 containers: []
	W0414 13:46:14.311191 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:14.311204 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:14.311283 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:14.352362 1223410 cri.go:89] found id: ""
	I0414 13:46:14.352396 1223410 logs.go:282] 0 containers: []
	W0414 13:46:14.352410 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:14.352424 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:14.352490 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:14.393252 1223410 cri.go:89] found id: ""
	I0414 13:46:14.393280 1223410 logs.go:282] 0 containers: []
	W0414 13:46:14.393289 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:14.393295 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:14.393351 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:14.432053 1223410 cri.go:89] found id: ""
	I0414 13:46:14.432093 1223410 logs.go:282] 0 containers: []
	W0414 13:46:14.432106 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:14.432113 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:14.432184 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:14.475563 1223410 cri.go:89] found id: ""
	I0414 13:46:14.475594 1223410 logs.go:282] 0 containers: []
	W0414 13:46:14.475603 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:14.475609 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:14.475677 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:14.518387 1223410 cri.go:89] found id: ""
	I0414 13:46:14.518434 1223410 logs.go:282] 0 containers: []
	W0414 13:46:14.518446 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:14.518455 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:14.518529 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:14.558773 1223410 cri.go:89] found id: ""
	I0414 13:46:14.558801 1223410 logs.go:282] 0 containers: []
	W0414 13:46:14.558810 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:14.558816 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:14.558874 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:14.603481 1223410 cri.go:89] found id: ""
	I0414 13:46:14.603517 1223410 logs.go:282] 0 containers: []
	W0414 13:46:14.603529 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:14.603543 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:14.603559 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:14.658247 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:14.658298 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:14.673087 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:14.673138 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:14.749778 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:14.749806 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:14.749823 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:14.843639 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:14.843729 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:17.393422 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:17.407630 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:17.407742 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:17.452591 1223410 cri.go:89] found id: ""
	I0414 13:46:17.452623 1223410 logs.go:282] 0 containers: []
	W0414 13:46:17.452634 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:17.452643 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:17.452708 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:17.496657 1223410 cri.go:89] found id: ""
	I0414 13:46:17.496691 1223410 logs.go:282] 0 containers: []
	W0414 13:46:17.496703 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:17.496712 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:17.496776 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:17.541363 1223410 cri.go:89] found id: ""
	I0414 13:46:17.541397 1223410 logs.go:282] 0 containers: []
	W0414 13:46:17.541409 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:17.541416 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:17.541485 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:17.581762 1223410 cri.go:89] found id: ""
	I0414 13:46:17.581794 1223410 logs.go:282] 0 containers: []
	W0414 13:46:17.581803 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:17.581810 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:17.581876 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:17.620576 1223410 cri.go:89] found id: ""
	I0414 13:46:17.620615 1223410 logs.go:282] 0 containers: []
	W0414 13:46:17.620626 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:17.620634 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:17.620720 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:17.662832 1223410 cri.go:89] found id: ""
	I0414 13:46:17.662874 1223410 logs.go:282] 0 containers: []
	W0414 13:46:17.662888 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:17.662896 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:17.662975 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:17.709911 1223410 cri.go:89] found id: ""
	I0414 13:46:17.709947 1223410 logs.go:282] 0 containers: []
	W0414 13:46:17.709960 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:17.709970 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:17.710048 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:17.747400 1223410 cri.go:89] found id: ""
	I0414 13:46:17.747436 1223410 logs.go:282] 0 containers: []
	W0414 13:46:17.747446 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:17.747459 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:17.747475 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:17.801265 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:17.801319 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:17.817283 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:17.817325 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:17.894445 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:17.894557 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:17.894599 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:17.980435 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:17.980482 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:20.529585 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:20.545088 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:20.545163 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:20.595863 1223410 cri.go:89] found id: ""
	I0414 13:46:20.595896 1223410 logs.go:282] 0 containers: []
	W0414 13:46:20.595908 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:20.595917 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:20.595986 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:20.642770 1223410 cri.go:89] found id: ""
	I0414 13:46:20.642810 1223410 logs.go:282] 0 containers: []
	W0414 13:46:20.642822 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:20.642830 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:20.642892 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:20.684977 1223410 cri.go:89] found id: ""
	I0414 13:46:20.685013 1223410 logs.go:282] 0 containers: []
	W0414 13:46:20.685025 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:20.685032 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:20.685096 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:20.731606 1223410 cri.go:89] found id: ""
	I0414 13:46:20.731648 1223410 logs.go:282] 0 containers: []
	W0414 13:46:20.731690 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:20.731699 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:20.731774 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:20.788027 1223410 cri.go:89] found id: ""
	I0414 13:46:20.788061 1223410 logs.go:282] 0 containers: []
	W0414 13:46:20.788076 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:20.788082 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:20.788138 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:20.832150 1223410 cri.go:89] found id: ""
	I0414 13:46:20.832179 1223410 logs.go:282] 0 containers: []
	W0414 13:46:20.832192 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:20.832199 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:20.832267 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:20.875992 1223410 cri.go:89] found id: ""
	I0414 13:46:20.876027 1223410 logs.go:282] 0 containers: []
	W0414 13:46:20.876039 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:20.876049 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:20.876115 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:20.915964 1223410 cri.go:89] found id: ""
	I0414 13:46:20.916007 1223410 logs.go:282] 0 containers: []
	W0414 13:46:20.916018 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:20.916032 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:20.916048 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:20.932760 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:20.932843 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:21.017725 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:21.017759 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:21.017778 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:21.118811 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:21.118861 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:21.161958 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:21.162007 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:23.732913 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:23.749353 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:23.749437 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:23.789940 1223410 cri.go:89] found id: ""
	I0414 13:46:23.789977 1223410 logs.go:282] 0 containers: []
	W0414 13:46:23.789989 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:23.789997 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:23.790071 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:23.848440 1223410 cri.go:89] found id: ""
	I0414 13:46:23.848465 1223410 logs.go:282] 0 containers: []
	W0414 13:46:23.848473 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:23.848479 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:23.848539 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:23.891143 1223410 cri.go:89] found id: ""
	I0414 13:46:23.891174 1223410 logs.go:282] 0 containers: []
	W0414 13:46:23.891182 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:23.891189 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:23.891258 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:23.932954 1223410 cri.go:89] found id: ""
	I0414 13:46:23.932974 1223410 logs.go:282] 0 containers: []
	W0414 13:46:23.932984 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:23.932992 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:23.933047 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:23.993541 1223410 cri.go:89] found id: ""
	I0414 13:46:23.993581 1223410 logs.go:282] 0 containers: []
	W0414 13:46:23.993593 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:23.993601 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:23.993669 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:24.042851 1223410 cri.go:89] found id: ""
	I0414 13:46:24.042888 1223410 logs.go:282] 0 containers: []
	W0414 13:46:24.042913 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:24.042931 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:24.043035 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:24.086588 1223410 cri.go:89] found id: ""
	I0414 13:46:24.086643 1223410 logs.go:282] 0 containers: []
	W0414 13:46:24.086653 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:24.086659 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:24.086714 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:24.129615 1223410 cri.go:89] found id: ""
	I0414 13:46:24.129659 1223410 logs.go:282] 0 containers: []
	W0414 13:46:24.129673 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:24.129687 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:24.129702 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:24.209693 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:24.209749 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:24.249875 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:24.249915 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:24.302762 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:24.302808 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:24.318136 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:24.318167 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:24.387923 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:26.889665 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:26.905264 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:26.905342 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:26.942006 1223410 cri.go:89] found id: ""
	I0414 13:46:26.942045 1223410 logs.go:282] 0 containers: []
	W0414 13:46:26.942054 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:26.942061 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:26.942116 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:26.986470 1223410 cri.go:89] found id: ""
	I0414 13:46:26.986513 1223410 logs.go:282] 0 containers: []
	W0414 13:46:26.986526 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:26.986534 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:26.986605 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:27.024399 1223410 cri.go:89] found id: ""
	I0414 13:46:27.024435 1223410 logs.go:282] 0 containers: []
	W0414 13:46:27.024444 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:27.024451 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:27.024519 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:27.065366 1223410 cri.go:89] found id: ""
	I0414 13:46:27.065400 1223410 logs.go:282] 0 containers: []
	W0414 13:46:27.065408 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:27.065416 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:27.065474 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:27.104110 1223410 cri.go:89] found id: ""
	I0414 13:46:27.104153 1223410 logs.go:282] 0 containers: []
	W0414 13:46:27.104162 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:27.104170 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:27.104249 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:27.141793 1223410 cri.go:89] found id: ""
	I0414 13:46:27.141825 1223410 logs.go:282] 0 containers: []
	W0414 13:46:27.141841 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:27.141850 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:27.141918 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:27.179926 1223410 cri.go:89] found id: ""
	I0414 13:46:27.179976 1223410 logs.go:282] 0 containers: []
	W0414 13:46:27.179989 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:27.179998 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:27.180081 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:27.216471 1223410 cri.go:89] found id: ""
	I0414 13:46:27.216512 1223410 logs.go:282] 0 containers: []
	W0414 13:46:27.216522 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:27.216532 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:27.216545 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:27.258847 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:27.258885 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:27.316357 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:27.316405 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:27.332437 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:27.332475 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:27.415405 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:27.415435 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:27.415453 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:30.001480 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:30.014853 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:30.014921 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:30.050301 1223410 cri.go:89] found id: ""
	I0414 13:46:30.050337 1223410 logs.go:282] 0 containers: []
	W0414 13:46:30.050346 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:30.050352 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:30.050412 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:30.088755 1223410 cri.go:89] found id: ""
	I0414 13:46:30.088790 1223410 logs.go:282] 0 containers: []
	W0414 13:46:30.088802 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:30.088811 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:30.088884 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:30.129048 1223410 cri.go:89] found id: ""
	I0414 13:46:30.129085 1223410 logs.go:282] 0 containers: []
	W0414 13:46:30.129094 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:30.129101 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:30.129164 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:30.164622 1223410 cri.go:89] found id: ""
	I0414 13:46:30.164658 1223410 logs.go:282] 0 containers: []
	W0414 13:46:30.164670 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:30.164682 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:30.164752 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:30.201199 1223410 cri.go:89] found id: ""
	I0414 13:46:30.201229 1223410 logs.go:282] 0 containers: []
	W0414 13:46:30.201239 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:30.201245 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:30.201309 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:30.250597 1223410 cri.go:89] found id: ""
	I0414 13:46:30.250635 1223410 logs.go:282] 0 containers: []
	W0414 13:46:30.250648 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:30.250657 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:30.250719 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:30.296492 1223410 cri.go:89] found id: ""
	I0414 13:46:30.296531 1223410 logs.go:282] 0 containers: []
	W0414 13:46:30.296543 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:30.296551 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:30.296618 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:30.340384 1223410 cri.go:89] found id: ""
	I0414 13:46:30.340418 1223410 logs.go:282] 0 containers: []
	W0414 13:46:30.340426 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:30.340437 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:30.340451 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:30.394745 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:30.394795 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:30.408491 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:30.408525 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:30.482607 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:30.482639 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:30.482724 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:30.570290 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:30.570348 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:33.114536 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:33.129839 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:33.129917 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:33.167609 1223410 cri.go:89] found id: ""
	I0414 13:46:33.167649 1223410 logs.go:282] 0 containers: []
	W0414 13:46:33.167684 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:33.167694 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:33.167761 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:33.207849 1223410 cri.go:89] found id: ""
	I0414 13:46:33.207878 1223410 logs.go:282] 0 containers: []
	W0414 13:46:33.207887 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:33.207893 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:33.207945 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:33.243092 1223410 cri.go:89] found id: ""
	I0414 13:46:33.243122 1223410 logs.go:282] 0 containers: []
	W0414 13:46:33.243134 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:33.243140 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:33.243201 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:33.281579 1223410 cri.go:89] found id: ""
	I0414 13:46:33.281615 1223410 logs.go:282] 0 containers: []
	W0414 13:46:33.281624 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:33.281630 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:33.281686 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:33.321181 1223410 cri.go:89] found id: ""
	I0414 13:46:33.321234 1223410 logs.go:282] 0 containers: []
	W0414 13:46:33.321244 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:33.321250 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:33.321312 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:33.356996 1223410 cri.go:89] found id: ""
	I0414 13:46:33.357031 1223410 logs.go:282] 0 containers: []
	W0414 13:46:33.357042 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:33.357049 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:33.357105 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:33.394307 1223410 cri.go:89] found id: ""
	I0414 13:46:33.394340 1223410 logs.go:282] 0 containers: []
	W0414 13:46:33.394347 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:33.394354 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:33.394405 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:33.432512 1223410 cri.go:89] found id: ""
	I0414 13:46:33.432545 1223410 logs.go:282] 0 containers: []
	W0414 13:46:33.432554 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:33.432565 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:33.432576 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:33.476182 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:33.476218 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:33.531455 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:33.531512 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:33.546475 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:33.546516 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:33.621930 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:33.621964 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:33.622002 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:36.199182 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:36.219777 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:36.219861 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:36.270834 1223410 cri.go:89] found id: ""
	I0414 13:46:36.270871 1223410 logs.go:282] 0 containers: []
	W0414 13:46:36.270884 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:36.270892 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:36.271015 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:36.335188 1223410 cri.go:89] found id: ""
	I0414 13:46:36.335221 1223410 logs.go:282] 0 containers: []
	W0414 13:46:36.335234 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:36.335242 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:36.335334 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:36.390761 1223410 cri.go:89] found id: ""
	I0414 13:46:36.390792 1223410 logs.go:282] 0 containers: []
	W0414 13:46:36.390803 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:36.390811 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:36.390903 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:36.437318 1223410 cri.go:89] found id: ""
	I0414 13:46:36.437354 1223410 logs.go:282] 0 containers: []
	W0414 13:46:36.437365 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:36.437374 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:36.437466 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:36.484169 1223410 cri.go:89] found id: ""
	I0414 13:46:36.484208 1223410 logs.go:282] 0 containers: []
	W0414 13:46:36.484221 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:36.484230 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:36.484384 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:36.534271 1223410 cri.go:89] found id: ""
	I0414 13:46:36.534312 1223410 logs.go:282] 0 containers: []
	W0414 13:46:36.534326 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:36.534334 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:36.534412 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:36.576165 1223410 cri.go:89] found id: ""
	I0414 13:46:36.576258 1223410 logs.go:282] 0 containers: []
	W0414 13:46:36.576275 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:36.576287 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:36.576390 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:36.625563 1223410 cri.go:89] found id: ""
	I0414 13:46:36.625602 1223410 logs.go:282] 0 containers: []
	W0414 13:46:36.625616 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:36.625631 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:36.625654 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:36.681084 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:36.681130 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:36.696762 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:36.696807 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:36.782861 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:36.782895 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:36.782915 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:36.881202 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:36.881229 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:39.427839 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:39.450323 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:39.450422 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:39.599823 1223410 cri.go:89] found id: ""
	I0414 13:46:39.599863 1223410 logs.go:282] 0 containers: []
	W0414 13:46:39.599876 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:39.599885 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:39.599953 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:39.660031 1223410 cri.go:89] found id: ""
	I0414 13:46:39.660065 1223410 logs.go:282] 0 containers: []
	W0414 13:46:39.660081 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:39.660088 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:39.660143 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:39.768703 1223410 cri.go:89] found id: ""
	I0414 13:46:39.768728 1223410 logs.go:282] 0 containers: []
	W0414 13:46:39.768736 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:39.768742 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:39.768792 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:39.839683 1223410 cri.go:89] found id: ""
	I0414 13:46:39.839723 1223410 logs.go:282] 0 containers: []
	W0414 13:46:39.839742 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:39.839749 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:39.839809 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:39.931788 1223410 cri.go:89] found id: ""
	I0414 13:46:39.931827 1223410 logs.go:282] 0 containers: []
	W0414 13:46:39.931839 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:39.931848 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:39.931912 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:39.993160 1223410 cri.go:89] found id: ""
	I0414 13:46:39.993197 1223410 logs.go:282] 0 containers: []
	W0414 13:46:39.993210 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:39.993220 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:39.993296 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:40.043127 1223410 cri.go:89] found id: ""
	I0414 13:46:40.043156 1223410 logs.go:282] 0 containers: []
	W0414 13:46:40.043167 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:40.043175 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:40.043230 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:40.088433 1223410 cri.go:89] found id: ""
	I0414 13:46:40.088474 1223410 logs.go:282] 0 containers: []
	W0414 13:46:40.088490 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:40.088503 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:40.088520 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:40.104753 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:40.104807 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:40.254081 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:40.254110 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:40.254132 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:40.400442 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:40.400552 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:40.467949 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:40.467995 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:43.058696 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:43.076717 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:43.076810 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:43.129265 1223410 cri.go:89] found id: ""
	I0414 13:46:43.129302 1223410 logs.go:282] 0 containers: []
	W0414 13:46:43.129314 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:43.129322 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:43.129413 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:43.182348 1223410 cri.go:89] found id: ""
	I0414 13:46:43.182418 1223410 logs.go:282] 0 containers: []
	W0414 13:46:43.182432 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:43.182442 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:43.182525 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:43.229616 1223410 cri.go:89] found id: ""
	I0414 13:46:43.229660 1223410 logs.go:282] 0 containers: []
	W0414 13:46:43.229673 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:43.229703 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:43.229785 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:43.274131 1223410 cri.go:89] found id: ""
	I0414 13:46:43.274164 1223410 logs.go:282] 0 containers: []
	W0414 13:46:43.274177 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:43.274186 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:43.274252 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:43.317634 1223410 cri.go:89] found id: ""
	I0414 13:46:43.317672 1223410 logs.go:282] 0 containers: []
	W0414 13:46:43.317685 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:43.317694 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:43.317764 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:43.373059 1223410 cri.go:89] found id: ""
	I0414 13:46:43.373090 1223410 logs.go:282] 0 containers: []
	W0414 13:46:43.373101 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:43.373114 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:43.373195 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:43.418209 1223410 cri.go:89] found id: ""
	I0414 13:46:43.418241 1223410 logs.go:282] 0 containers: []
	W0414 13:46:43.418253 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:43.418261 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:43.418326 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:43.455407 1223410 cri.go:89] found id: ""
	I0414 13:46:43.455446 1223410 logs.go:282] 0 containers: []
	W0414 13:46:43.455456 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:43.455467 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:43.455484 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:43.538244 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:43.538282 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:43.538300 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:43.646629 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:43.646716 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:43.704966 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:43.705032 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:43.773062 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:43.773133 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:46.294779 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:46.311241 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:46:46.311368 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:46:46.363595 1223410 cri.go:89] found id: ""
	I0414 13:46:46.363662 1223410 logs.go:282] 0 containers: []
	W0414 13:46:46.363675 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:46:46.363684 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:46:46.363771 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:46:46.414333 1223410 cri.go:89] found id: ""
	I0414 13:46:46.414374 1223410 logs.go:282] 0 containers: []
	W0414 13:46:46.414386 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:46:46.414394 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:46:46.414484 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:46:46.467012 1223410 cri.go:89] found id: ""
	I0414 13:46:46.467049 1223410 logs.go:282] 0 containers: []
	W0414 13:46:46.467061 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:46:46.467069 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:46:46.467141 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:46:46.520192 1223410 cri.go:89] found id: ""
	I0414 13:46:46.520222 1223410 logs.go:282] 0 containers: []
	W0414 13:46:46.520234 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:46:46.520243 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:46:46.520318 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:46:46.565183 1223410 cri.go:89] found id: ""
	I0414 13:46:46.565220 1223410 logs.go:282] 0 containers: []
	W0414 13:46:46.565233 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:46:46.565241 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:46:46.565313 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:46:46.616200 1223410 cri.go:89] found id: ""
	I0414 13:46:46.616266 1223410 logs.go:282] 0 containers: []
	W0414 13:46:46.616283 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:46:46.616293 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:46:46.616372 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:46:46.664388 1223410 cri.go:89] found id: ""
	I0414 13:46:46.664423 1223410 logs.go:282] 0 containers: []
	W0414 13:46:46.664432 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:46:46.664440 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:46:46.664508 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:46:46.705996 1223410 cri.go:89] found id: ""
	I0414 13:46:46.706035 1223410 logs.go:282] 0 containers: []
	W0414 13:46:46.706044 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:46:46.706054 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:46:46.706070 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 13:46:46.822069 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:46:46.822122 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:46:46.869877 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:46:46.869913 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:46:46.931474 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:46:46.931542 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:46:46.950800 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:46:46.950847 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:46:47.025762 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:46:49.526752 1223410 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:46:49.542981 1223410 kubeadm.go:597] duration metric: took 4m2.230594548s to restartPrimaryControlPlane
	W0414 13:46:49.543107 1223410 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 13:46:49.543160 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 13:46:53.377651 1223410 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.834467333s)
	I0414 13:46:53.377737 1223410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:46:53.396074 1223410 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:46:53.429240 1223410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:46:53.466493 1223410 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:46:53.466515 1223410 kubeadm.go:157] found existing configuration files:
	
	I0414 13:46:53.466568 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:46:53.497168 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:46:53.497231 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:46:53.513398 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:46:53.526041 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:46:53.526127 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:46:53.538924 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:46:53.549716 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:46:53.549788 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:46:53.561025 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:46:53.571617 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:46:53.571734 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:46:53.586894 1223410 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:46:53.697925 1223410 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:46:53.698247 1223410 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:46:53.862575 1223410 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:46:53.862718 1223410 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:46:53.862834 1223410 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:46:54.091340 1223410 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:46:54.093956 1223410 out.go:235]   - Generating certificates and keys ...
	I0414 13:46:54.094095 1223410 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:46:54.094216 1223410 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:46:54.094356 1223410 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 13:46:54.094446 1223410 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 13:46:54.094559 1223410 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 13:46:54.094640 1223410 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 13:46:54.094735 1223410 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 13:46:54.094829 1223410 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 13:46:54.094946 1223410 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 13:46:54.095051 1223410 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 13:46:54.095103 1223410 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 13:46:54.095186 1223410 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:46:54.178995 1223410 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:46:54.595287 1223410 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:46:54.815479 1223410 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:46:54.915868 1223410 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:46:54.937503 1223410 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:46:54.938602 1223410 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:46:54.938721 1223410 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:46:55.155385 1223410 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:46:55.157638 1223410 out.go:235]   - Booting up control plane ...
	I0414 13:46:55.157822 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:46:55.169633 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:46:55.171397 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:46:55.172753 1223410 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:46:55.176774 1223410 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:47:35.178762 1223410 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 13:47:35.179264 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:47:35.179477 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:47:40.179890 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:47:40.180197 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:47:50.181003 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:47:50.181366 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:48:10.182173 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:48:10.182474 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:48:50.184698 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:48:50.184977 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:48:50.185017 1223410 kubeadm.go:310] 
	I0414 13:48:50.185092 1223410 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 13:48:50.185148 1223410 kubeadm.go:310] 		timed out waiting for the condition
	I0414 13:48:50.185160 1223410 kubeadm.go:310] 
	I0414 13:48:50.185211 1223410 kubeadm.go:310] 	This error is likely caused by:
	I0414 13:48:50.185266 1223410 kubeadm.go:310] 		- The kubelet is not running
	I0414 13:48:50.185405 1223410 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 13:48:50.185417 1223410 kubeadm.go:310] 
	I0414 13:48:50.185554 1223410 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 13:48:50.185602 1223410 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 13:48:50.185648 1223410 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 13:48:50.185658 1223410 kubeadm.go:310] 
	I0414 13:48:50.185794 1223410 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 13:48:50.185908 1223410 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 13:48:50.185921 1223410 kubeadm.go:310] 
	I0414 13:48:50.186068 1223410 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 13:48:50.186198 1223410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 13:48:50.186316 1223410 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 13:48:50.186417 1223410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 13:48:50.186429 1223410 kubeadm.go:310] 
	I0414 13:48:50.187376 1223410 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 13:48:50.187531 1223410 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 13:48:50.187633 1223410 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 13:48:50.187832 1223410 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 13:48:50.187896 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 13:48:51.882669 1223410 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.694725628s)
	I0414 13:48:51.882779 1223410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:48:51.902283 1223410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:48:51.913838 1223410 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:48:51.913866 1223410 kubeadm.go:157] found existing configuration files:
	
	I0414 13:48:51.913916 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:48:51.924784 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:48:51.924856 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:48:51.936027 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:48:51.946946 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:48:51.947066 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:48:51.958515 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:48:51.969070 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:48:51.969155 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:48:51.981181 1223410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:48:51.991763 1223410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:48:51.991856 1223410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:48:52.003120 1223410 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:48:52.279343 1223410 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 13:50:48.524155 1223410 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 13:50:48.524328 1223410 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 13:50:48.525904 1223410 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:50:48.525995 1223410 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:50:48.526105 1223410 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:50:48.526269 1223410 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:50:48.526421 1223410 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:50:48.526514 1223410 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:50:48.528418 1223410 out.go:235]   - Generating certificates and keys ...
	I0414 13:50:48.528530 1223410 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:50:48.528624 1223410 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:50:48.528765 1223410 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 13:50:48.528871 1223410 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 13:50:48.528983 1223410 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 13:50:48.529064 1223410 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 13:50:48.529155 1223410 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 13:50:48.529254 1223410 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 13:50:48.529417 1223410 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 13:50:48.529560 1223410 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 13:50:48.529604 1223410 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 13:50:48.529704 1223410 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:50:48.529789 1223410 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:50:48.529839 1223410 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:50:48.529919 1223410 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:50:48.530000 1223410 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:50:48.530167 1223410 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:50:48.530286 1223410 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:50:48.530362 1223410 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:50:48.530461 1223410 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:50:48.532385 1223410 out.go:235]   - Booting up control plane ...
	I0414 13:50:48.532556 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:50:48.532689 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:50:48.532768 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:50:48.532843 1223410 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:50:48.533084 1223410 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:50:48.533159 1223410 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 13:50:48.533265 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.533525 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.533594 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.533814 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.533912 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534108 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534172 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534394 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534516 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534801 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534821 1223410 kubeadm.go:310] 
	I0414 13:50:48.534885 1223410 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 13:50:48.534955 1223410 kubeadm.go:310] 		timed out waiting for the condition
	I0414 13:50:48.534967 1223410 kubeadm.go:310] 
	I0414 13:50:48.535000 1223410 kubeadm.go:310] 	This error is likely caused by:
	I0414 13:50:48.535047 1223410 kubeadm.go:310] 		- The kubelet is not running
	I0414 13:50:48.535180 1223410 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 13:50:48.535194 1223410 kubeadm.go:310] 
	I0414 13:50:48.535371 1223410 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 13:50:48.535439 1223410 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 13:50:48.535500 1223410 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 13:50:48.535585 1223410 kubeadm.go:310] 
	I0414 13:50:48.535769 1223410 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 13:50:48.535905 1223410 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 13:50:48.535922 1223410 kubeadm.go:310] 
	I0414 13:50:48.536089 1223410 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 13:50:48.536225 1223410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 13:50:48.536329 1223410 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 13:50:48.536413 1223410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 13:50:48.536508 1223410 kubeadm.go:310] 
	I0414 13:50:48.536517 1223410 kubeadm.go:394] duration metric: took 8m1.284425887s to StartCluster
	I0414 13:50:48.536575 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:50:48.536648 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:50:48.585550 1223410 cri.go:89] found id: ""
	I0414 13:50:48.585590 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.585601 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:50:48.585609 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:50:48.585672 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:50:48.626898 1223410 cri.go:89] found id: ""
	I0414 13:50:48.626928 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.626940 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:50:48.626948 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:50:48.627009 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:50:48.670274 1223410 cri.go:89] found id: ""
	I0414 13:50:48.670317 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.670330 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:50:48.670338 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:50:48.670411 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:50:48.720563 1223410 cri.go:89] found id: ""
	I0414 13:50:48.720600 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.720611 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:50:48.720619 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:50:48.720686 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:50:48.767764 1223410 cri.go:89] found id: ""
	I0414 13:50:48.767799 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.767807 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:50:48.767814 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:50:48.767866 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:50:48.818486 1223410 cri.go:89] found id: ""
	I0414 13:50:48.818531 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.818544 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:50:48.818553 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:50:48.818619 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:50:48.867564 1223410 cri.go:89] found id: ""
	I0414 13:50:48.867644 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.867692 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:50:48.867706 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:50:48.867774 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:50:48.906916 1223410 cri.go:89] found id: ""
	I0414 13:50:48.906950 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.906958 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:50:48.906971 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:50:48.906988 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:50:48.955626 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:50:48.955683 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:50:49.022469 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:50:49.022525 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:50:49.041402 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:50:49.041449 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:50:49.131342 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:50:49.131373 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:50:49.131392 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0414 13:50:49.248634 1223410 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 13:50:49.248726 1223410 out.go:270] * 
	* 
	W0414 13:50:49.248809 1223410 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:50:49.248828 1223410 out.go:270] * 
	* 
	W0414 13:50:49.249735 1223410 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 13:50:49.253971 1223410 out.go:201] 
	W0414 13:50:49.255696 1223410 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:50:49.255776 1223410 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 13:50:49.255807 1223410 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 13:50:49.257975 1223410 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-966509 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
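For reference, a minimal triage sketch based only on the advice already printed in the kubeadm/minikube output above (the profile name is taken from this run; running the commands inside the VM via `minikube ssh -p old-k8s-version-966509` is an assumption, not something the test did):

	# inspect the kubelet on the node (commands quoted from the kubeadm output above)
	systemctl status kubelet
	journalctl -xeu kubelet
	# list control-plane containers via the CRI-O socket
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet logs point at a cgroup-driver mismatch, the suggestion logged above is to retry the start with --extra-config=kubelet.cgroup-driver=systemd (see https://github.com/kubernetes/minikube/issues/4172).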
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 2 (287.683802ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-966509 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-966509 logs -n 25: (1.06212477s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-734713                             | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713                             | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713                             | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713                             | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | cat /etc/docker/daemon.json                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC |                     |
	|         | docker system info                                   |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713                             | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo cat                    | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo cat                    | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713                             | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo cat                    | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713                             | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | find /etc/crio -type f -exec                         |                           |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-734713 sudo                        | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | crio config                                          |                           |         |         |                     |                     |
	| delete  | -p custom-flannel-734713                             | custom-flannel-734713     | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	| start   | -p bridge-734713 --memory=3072                       | bridge-734713             | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-734713                         | enable-default-cni-734713 | jenkins | v1.35.0 | 14 Apr 25 13:50 UTC | 14 Apr 25 13:50 UTC |
	|         | pgrep -a kubelet                                     |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 13:50:31
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 13:50:31.474135 1234466 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:50:31.474253 1234466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:50:31.474257 1234466 out.go:358] Setting ErrFile to fd 2...
	I0414 13:50:31.474262 1234466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:50:31.474520 1234466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:50:31.475288 1234466 out.go:352] Setting JSON to false
	I0414 13:50:31.477061 1234466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19979,"bootTime":1744618653,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:50:31.477154 1234466 start.go:139] virtualization: kvm guest
	I0414 13:50:31.479607 1234466 out.go:177] * [bridge-734713] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:50:31.481863 1234466 notify.go:220] Checking for updates...
	I0414 13:50:31.481878 1234466 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 13:50:31.483700 1234466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:50:31.485289 1234466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:50:31.487251 1234466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:50:31.489524 1234466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:50:31.491617 1234466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:50:31.494384 1234466 config.go:182] Loaded profile config "enable-default-cni-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:50:31.494599 1234466 config.go:182] Loaded profile config "flannel-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:50:31.494768 1234466 config.go:182] Loaded profile config "old-k8s-version-966509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 13:50:31.494943 1234466 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:50:31.538765 1234466 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 13:50:31.540246 1234466 start.go:297] selected driver: kvm2
	I0414 13:50:31.540269 1234466 start.go:901] validating driver "kvm2" against <nil>
	I0414 13:50:31.540283 1234466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:50:31.541164 1234466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:50:31.541264 1234466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:50:31.559397 1234466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:50:31.559459 1234466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 13:50:31.559769 1234466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:50:31.559813 1234466 cni.go:84] Creating CNI manager for "bridge"
	I0414 13:50:31.559821 1234466 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 13:50:31.559887 1234466 start.go:340] cluster config:
	{Name:bridge-734713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:50:31.560014 1234466 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:50:31.562179 1234466 out.go:177] * Starting "bridge-734713" primary control-plane node in "bridge-734713" cluster
	I0414 13:50:29.334946 1231023 pod_ready.go:103] pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace has status "Ready":"False"
	I0414 13:50:31.833321 1231023 pod_ready.go:98] pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:31 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.69 HostIPs:[{IP:192.168.39.69}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 13:50:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 13:50:21 +0000 UTC,FinishedAt:2025-04-14 13:50:31 +0000 UTC,ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3 Started:0xc000545230 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0007929a0} {Name:kube-api-access-gz8ls MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0007929b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 13:50:31.833367 1231023 pod_ready.go:82] duration metric: took 12.006367856s for pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace to be "Ready" ...
	E0414 13:50:31.833383 1231023 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:31 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.69 HostIPs:[{IP:192.168.39.69}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 13:50:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 13:50:21 +0000 UTC,FinishedAt:2025-04-14 13:50:31 +0000 UTC,ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3 Started:0xc000545230 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0007929a0} {Name:kube-api-access-gz8ls MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0007929b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 13:50:31.833400 1231023 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wc7z8" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.838889 1231023 pod_ready.go:93] pod "coredns-668d6bf9bc-wc7z8" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.838917 1231023 pod_ready.go:82] duration metric: took 5.507401ms for pod "coredns-668d6bf9bc-wc7z8" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.838931 1231023 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.846654 1231023 pod_ready.go:93] pod "etcd-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.846680 1231023 pod_ready.go:82] duration metric: took 7.739982ms for pod "etcd-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.846693 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.851573 1231023 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.851599 1231023 pod_ready.go:82] duration metric: took 4.900716ms for pod "kube-apiserver-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.851610 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.861178 1231023 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.861205 1231023 pod_ready.go:82] duration metric: took 9.588121ms for pod "kube-controller-manager-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.861215 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-9w89x" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.231339 1231023 pod_ready.go:93] pod "kube-proxy-9w89x" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:32.231363 1231023 pod_ready.go:82] duration metric: took 370.139759ms for pod "kube-proxy-9w89x" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.231373 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.630989 1231023 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:32.631015 1231023 pod_ready.go:82] duration metric: took 399.636056ms for pod "kube-scheduler-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.631024 1231023 pod_ready.go:39] duration metric: took 12.810229756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:50:32.631043 1231023 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:50:32.631107 1231023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:50:32.645651 1231023 api_server.go:72] duration metric: took 13.222143925s to wait for apiserver process to appear ...
	I0414 13:50:32.645687 1231023 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:50:32.645709 1231023 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I0414 13:50:32.651253 1231023 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I0414 13:50:32.652492 1231023 api_server.go:141] control plane version: v1.32.2
	I0414 13:50:32.652525 1231023 api_server.go:131] duration metric: took 6.829312ms to wait for apiserver health ...
	I0414 13:50:32.652539 1231023 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:50:32.832476 1231023 system_pods.go:59] 7 kube-system pods found
	I0414 13:50:32.832516 1231023 system_pods.go:61] "coredns-668d6bf9bc-wc7z8" [0f79f746-c943-4e60-a284-492bb981e61f] Running
	I0414 13:50:32.832522 1231023 system_pods.go:61] "etcd-enable-default-cni-734713" [19510a8c-d3ce-4d2d-ae16-913bbdf644aa] Running
	I0414 13:50:32.832527 1231023 system_pods.go:61] "kube-apiserver-enable-default-cni-734713" [adb74c7b-1709-4a20-b0af-e71ceff39a2c] Running
	I0414 13:50:32.832531 1231023 system_pods.go:61] "kube-controller-manager-enable-default-cni-734713" [1ac3ccc9-9356-4859-a021-41a7adf3620d] Running
	I0414 13:50:32.832534 1231023 system_pods.go:61] "kube-proxy-9w89x" [ced87d7f-cfc0-4474-bbff-273bf081d028] Running
	I0414 13:50:32.832539 1231023 system_pods.go:61] "kube-scheduler-enable-default-cni-734713" [5be3eb8d-6845-4526-b0bd-e870fa09ab3d] Running
	I0414 13:50:32.832542 1231023 system_pods.go:61] "storage-provisioner" [a321a108-c3a8-47b6-bfaa-c32b99f04b1e] Running
	I0414 13:50:32.832548 1231023 system_pods.go:74] duration metric: took 180.003646ms to wait for pod list to return data ...
	I0414 13:50:32.832556 1231023 default_sa.go:34] waiting for default service account to be created ...
	I0414 13:50:33.031788 1231023 default_sa.go:45] found service account: "default"
	I0414 13:50:33.031827 1231023 default_sa.go:55] duration metric: took 199.260003ms for default service account to be created ...
	I0414 13:50:33.031842 1231023 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 13:50:33.232342 1231023 system_pods.go:86] 7 kube-system pods found
	I0414 13:50:33.232377 1231023 system_pods.go:89] "coredns-668d6bf9bc-wc7z8" [0f79f746-c943-4e60-a284-492bb981e61f] Running
	I0414 13:50:33.232383 1231023 system_pods.go:89] "etcd-enable-default-cni-734713" [19510a8c-d3ce-4d2d-ae16-913bbdf644aa] Running
	I0414 13:50:33.232387 1231023 system_pods.go:89] "kube-apiserver-enable-default-cni-734713" [adb74c7b-1709-4a20-b0af-e71ceff39a2c] Running
	I0414 13:50:33.232391 1231023 system_pods.go:89] "kube-controller-manager-enable-default-cni-734713" [1ac3ccc9-9356-4859-a021-41a7adf3620d] Running
	I0414 13:50:33.232395 1231023 system_pods.go:89] "kube-proxy-9w89x" [ced87d7f-cfc0-4474-bbff-273bf081d028] Running
	I0414 13:50:33.232399 1231023 system_pods.go:89] "kube-scheduler-enable-default-cni-734713" [5be3eb8d-6845-4526-b0bd-e870fa09ab3d] Running
	I0414 13:50:33.232402 1231023 system_pods.go:89] "storage-provisioner" [a321a108-c3a8-47b6-bfaa-c32b99f04b1e] Running
	I0414 13:50:33.232408 1231023 system_pods.go:126] duration metric: took 200.561466ms to wait for k8s-apps to be running ...
	I0414 13:50:33.232415 1231023 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 13:50:33.232464 1231023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:50:33.247725 1231023 system_svc.go:56] duration metric: took 15.294005ms WaitForService to wait for kubelet
	I0414 13:50:33.247763 1231023 kubeadm.go:582] duration metric: took 13.824265507s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:50:33.247796 1231023 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:50:33.431949 1231023 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:50:33.431992 1231023 node_conditions.go:123] node cpu capacity is 2
	I0414 13:50:33.432010 1231023 node_conditions.go:105] duration metric: took 184.207615ms to run NodePressure ...
	I0414 13:50:33.432026 1231023 start.go:241] waiting for startup goroutines ...
	I0414 13:50:33.432036 1231023 start.go:246] waiting for cluster config update ...
	I0414 13:50:33.432076 1231023 start.go:255] writing updated cluster config ...
	I0414 13:50:33.432420 1231023 ssh_runner.go:195] Run: rm -f paused
	I0414 13:50:33.493793 1231023 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 13:50:33.496245 1231023 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-734713" cluster and "default" namespace by default
	I0414 13:50:29.976954 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:29.978228 1232896 main.go:141] libmachine: (flannel-734713) DBG | unable to find current IP address of domain flannel-734713 in network mk-flannel-734713
	I0414 13:50:29.978298 1232896 main.go:141] libmachine: (flannel-734713) DBG | I0414 13:50:29.977978 1232920 retry.go:31] will retry after 3.36970346s: waiting for domain to come up
	I0414 13:50:33.352083 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:33.352787 1232896 main.go:141] libmachine: (flannel-734713) DBG | unable to find current IP address of domain flannel-734713 in network mk-flannel-734713
	I0414 13:50:33.352813 1232896 main.go:141] libmachine: (flannel-734713) DBG | I0414 13:50:33.352721 1232920 retry.go:31] will retry after 4.281011349s: waiting for domain to come up
	I0414 13:50:31.563813 1234466 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 13:50:31.563891 1234466 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 13:50:31.563915 1234466 cache.go:56] Caching tarball of preloaded images
	I0414 13:50:31.564056 1234466 preload.go:172] Found /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 13:50:31.564078 1234466 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 13:50:31.564242 1234466 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/config.json ...
	I0414 13:50:31.564277 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/config.json: {Name:mk2204b108f022f99d564aa50c55629979eef512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:31.564486 1234466 start.go:360] acquireMachinesLock for bridge-734713: {Name:mk1c744e856a4885e6214d755048926590b4b12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 13:50:37.637020 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.637866 1232896 main.go:141] libmachine: (flannel-734713) found domain IP: 192.168.72.152
	I0414 13:50:37.637896 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has current primary IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.637901 1232896 main.go:141] libmachine: (flannel-734713) reserving static IP address...
	I0414 13:50:37.638450 1232896 main.go:141] libmachine: (flannel-734713) DBG | unable to find host DHCP lease matching {name: "flannel-734713", mac: "52:54:00:9e:9a:48", ip: "192.168.72.152"} in network mk-flannel-734713
	I0414 13:50:37.754591 1232896 main.go:141] libmachine: (flannel-734713) DBG | Getting to WaitForSSH function...
	I0414 13:50:37.754621 1232896 main.go:141] libmachine: (flannel-734713) reserved static IP address 192.168.72.152 for domain flannel-734713
	I0414 13:50:37.754634 1232896 main.go:141] libmachine: (flannel-734713) waiting for SSH...
	I0414 13:50:37.758535 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.759318 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:37.759361 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.759550 1232896 main.go:141] libmachine: (flannel-734713) DBG | Using SSH client type: external
	I0414 13:50:37.759582 1232896 main.go:141] libmachine: (flannel-734713) DBG | Using SSH private key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa (-rw-------)
	I0414 13:50:37.759619 1232896 main.go:141] libmachine: (flannel-734713) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:50:37.759640 1232896 main.go:141] libmachine: (flannel-734713) DBG | About to run SSH command:
	I0414 13:50:37.759648 1232896 main.go:141] libmachine: (flannel-734713) DBG | exit 0
	I0414 13:50:37.892219 1232896 main.go:141] libmachine: (flannel-734713) DBG | SSH cmd err, output: <nil>: 
	I0414 13:50:37.892578 1232896 main.go:141] libmachine: (flannel-734713) KVM machine creation complete
	I0414 13:50:37.892927 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetConfigRaw
	I0414 13:50:37.893493 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:37.893697 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:37.893915 1232896 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 13:50:37.893934 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetState
	I0414 13:50:37.895553 1232896 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 13:50:37.895573 1232896 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 13:50:37.895581 1232896 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 13:50:37.895590 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:37.899289 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.899748 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:37.899782 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.900093 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:37.900351 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:37.900554 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:37.900695 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:37.900911 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:37.901234 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:37.901251 1232896 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 13:50:38.015885 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:50:38.015916 1232896 main.go:141] libmachine: Detecting the provisioner...
	I0414 13:50:38.015928 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.019947 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.020401 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.020434 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.020711 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.021012 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.021325 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.021507 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.021832 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.022086 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.022100 1232896 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 13:50:38.136844 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 13:50:38.136924 1232896 main.go:141] libmachine: found compatible host: buildroot
	I0414 13:50:38.136932 1232896 main.go:141] libmachine: Provisioning with buildroot...
	I0414 13:50:38.136941 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetMachineName
	I0414 13:50:38.137270 1232896 buildroot.go:166] provisioning hostname "flannel-734713"
	I0414 13:50:38.137308 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetMachineName
	I0414 13:50:38.137547 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.141614 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.142106 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.142144 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.142427 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.142670 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.142873 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.143110 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.143324 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.143622 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.143683 1232896 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-734713 && echo "flannel-734713" | sudo tee /etc/hostname
	I0414 13:50:38.270664 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-734713
	
	I0414 13:50:38.270700 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.274038 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.274480 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.274509 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.274796 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.275053 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.275214 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.275388 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.275567 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.275847 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.275876 1232896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-734713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-734713/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-734713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:50:38.401361 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:50:38.401397 1232896 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 13:50:38.401429 1232896 buildroot.go:174] setting up certificates
	I0414 13:50:38.401441 1232896 provision.go:84] configureAuth start
	I0414 13:50:38.401451 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetMachineName
	I0414 13:50:38.401767 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:38.404744 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.405311 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.405344 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.405588 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.408468 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.408941 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.408973 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.409159 1232896 provision.go:143] copyHostCerts
	I0414 13:50:38.409231 1232896 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem, removing ...
	I0414 13:50:38.409255 1232896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem
	I0414 13:50:38.409353 1232896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 13:50:38.409483 1232896 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem, removing ...
	I0414 13:50:38.409494 1232896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem
	I0414 13:50:38.409521 1232896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 13:50:38.409584 1232896 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem, removing ...
	I0414 13:50:38.409592 1232896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem
	I0414 13:50:38.409616 1232896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 13:50:38.409667 1232896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.flannel-734713 san=[127.0.0.1 192.168.72.152 flannel-734713 localhost minikube]
	I0414 13:50:38.622027 1232896 provision.go:177] copyRemoteCerts
	I0414 13:50:38.622101 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:50:38.622129 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.625644 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.626308 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.626341 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.626672 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.626943 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.627175 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.627360 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:38.717313 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:50:38.746438 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0414 13:50:38.774217 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 13:50:38.800993 1232896 provision.go:87] duration metric: took 399.533017ms to configureAuth
	I0414 13:50:38.801037 1232896 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:50:38.801286 1232896 config.go:182] Loaded profile config "flannel-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:50:38.801390 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.804612 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.805077 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.805108 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.805235 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.805516 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.805686 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.805838 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.806026 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.806227 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.806245 1232896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:50:39.047256 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:50:39.047322 1232896 main.go:141] libmachine: Checking connection to Docker...
	I0414 13:50:39.047335 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetURL
	I0414 13:50:39.049101 1232896 main.go:141] libmachine: (flannel-734713) DBG | using libvirt version 6000000
	I0414 13:50:39.052133 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.052668 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.052706 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.052895 1232896 main.go:141] libmachine: Docker is up and running!
	I0414 13:50:39.052917 1232896 main.go:141] libmachine: Reticulating splines...
	I0414 13:50:39.052927 1232896 client.go:171] duration metric: took 24.751714339s to LocalClient.Create
	I0414 13:50:39.052964 1232896 start.go:167] duration metric: took 24.751802794s to libmachine.API.Create "flannel-734713"
	I0414 13:50:39.052977 1232896 start.go:293] postStartSetup for "flannel-734713" (driver="kvm2")
	I0414 13:50:39.052993 1232896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:50:39.053021 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.053344 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:50:39.053380 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.056234 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.056651 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.056683 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.056948 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.057181 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.057386 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.057603 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:39.147363 1232896 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:50:39.152531 1232896 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:50:39.152565 1232896 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 13:50:39.152666 1232896 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 13:50:39.152797 1232896 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem -> 11757462.pem in /etc/ssl/certs
	I0414 13:50:39.152913 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:50:39.163686 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:50:39.313206 1234466 start.go:364] duration metric: took 7.7486593s to acquireMachinesLock for "bridge-734713"
	I0414 13:50:39.313286 1234466 start.go:93] Provisioning new machine with config: &{Name:bridge-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:50:39.313465 1234466 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 13:50:39.191538 1232896 start.go:296] duration metric: took 138.53561ms for postStartSetup
	I0414 13:50:39.191625 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetConfigRaw
	I0414 13:50:39.192675 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:39.195982 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.196370 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.196403 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.196711 1232896 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/config.json ...
	I0414 13:50:39.196933 1232896 start.go:128] duration metric: took 24.920662395s to createHost
	I0414 13:50:39.196962 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.199575 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.199971 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.200009 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.200265 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.200506 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.200716 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.200836 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.201017 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:39.201235 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:39.201252 1232896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:50:39.312992 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744638639.294895113
	
	I0414 13:50:39.313027 1232896 fix.go:216] guest clock: 1744638639.294895113
	I0414 13:50:39.313040 1232896 fix.go:229] Guest: 2025-04-14 13:50:39.294895113 +0000 UTC Remote: 2025-04-14 13:50:39.196948569 +0000 UTC m=+25.069092558 (delta=97.946544ms)
	I0414 13:50:39.313076 1232896 fix.go:200] guest clock delta is within tolerance: 97.946544ms
	I0414 13:50:39.313084 1232896 start.go:83] releasing machines lock for "flannel-734713", held for 25.03689115s
	I0414 13:50:39.313123 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.313495 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:39.316913 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.317374 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.317407 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.317648 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.318454 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.318745 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.318848 1232896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:50:39.318899 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.319070 1232896 ssh_runner.go:195] Run: cat /version.json
	I0414 13:50:39.319106 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.322650 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.322688 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.323095 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.323127 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.323152 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.323175 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.323347 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.323526 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.323628 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.323746 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.323868 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.323943 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.324058 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:39.324098 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:39.429639 1232896 ssh_runner.go:195] Run: systemctl --version
	I0414 13:50:39.437143 1232896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:50:39.610972 1232896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:50:39.618687 1232896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:50:39.618790 1232896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:50:39.638285 1232896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:50:39.638323 1232896 start.go:495] detecting cgroup driver to use...
	I0414 13:50:39.638408 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:50:39.657548 1232896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:50:39.672881 1232896 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:50:39.672968 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:50:39.688263 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:50:39.705072 1232896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:50:39.850153 1232896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:50:40.021507 1232896 docker.go:233] disabling docker service ...
	I0414 13:50:40.021590 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:50:40.039954 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:50:40.056200 1232896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:50:40.200612 1232896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:50:40.323358 1232896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:50:40.339938 1232896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:50:40.362920 1232896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 13:50:40.363030 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.377645 1232896 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:50:40.377729 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.389966 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.401671 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.412575 1232896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:50:40.424261 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.436885 1232896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.458584 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.471169 1232896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:50:40.481527 1232896 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:50:40.481614 1232896 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:50:40.494853 1232896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:50:40.509112 1232896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:50:40.638794 1232896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:50:40.738537 1232896 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:50:40.738626 1232896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:50:40.744588 1232896 start.go:563] Will wait 60s for crictl version
	I0414 13:50:40.744653 1232896 ssh_runner.go:195] Run: which crictl
	I0414 13:50:40.749602 1232896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:50:40.793798 1232896 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:50:40.793927 1232896 ssh_runner.go:195] Run: crio --version
	I0414 13:50:40.827010 1232896 ssh_runner.go:195] Run: crio --version
	I0414 13:50:40.862877 1232896 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 13:50:39.315866 1234466 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 13:50:39.316102 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:50:39.316179 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:50:39.339239 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I0414 13:50:39.340033 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:50:39.340744 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:50:39.340773 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:50:39.341299 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:50:39.341627 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetMachineName
	I0414 13:50:39.341844 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:50:39.342129 1234466 start.go:159] libmachine.API.Create for "bridge-734713" (driver="kvm2")
	I0414 13:50:39.342175 1234466 client.go:168] LocalClient.Create starting
	I0414 13:50:39.342222 1234466 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem
	I0414 13:50:39.342289 1234466 main.go:141] libmachine: Decoding PEM data...
	I0414 13:50:39.342312 1234466 main.go:141] libmachine: Parsing certificate...
	I0414 13:50:39.342402 1234466 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem
	I0414 13:50:39.342433 1234466 main.go:141] libmachine: Decoding PEM data...
	I0414 13:50:39.342446 1234466 main.go:141] libmachine: Parsing certificate...
	I0414 13:50:39.342470 1234466 main.go:141] libmachine: Running pre-create checks...
	I0414 13:50:39.342485 1234466 main.go:141] libmachine: (bridge-734713) Calling .PreCreateCheck
	I0414 13:50:39.342957 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetConfigRaw
	I0414 13:50:39.343643 1234466 main.go:141] libmachine: Creating machine...
	I0414 13:50:39.343685 1234466 main.go:141] libmachine: (bridge-734713) Calling .Create
	I0414 13:50:39.343972 1234466 main.go:141] libmachine: (bridge-734713) creating KVM machine...
	I0414 13:50:39.343990 1234466 main.go:141] libmachine: (bridge-734713) creating network...
	I0414 13:50:39.345635 1234466 main.go:141] libmachine: (bridge-734713) DBG | found existing default KVM network
	I0414 13:50:39.347502 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.347251 1234612 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:76:3b:72} reservation:<nil>}
	I0414 13:50:39.349222 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.349053 1234612 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000340000}
	I0414 13:50:39.349267 1234466 main.go:141] libmachine: (bridge-734713) DBG | created network xml: 
	I0414 13:50:39.349280 1234466 main.go:141] libmachine: (bridge-734713) DBG | <network>
	I0414 13:50:39.349301 1234466 main.go:141] libmachine: (bridge-734713) DBG |   <name>mk-bridge-734713</name>
	I0414 13:50:39.349317 1234466 main.go:141] libmachine: (bridge-734713) DBG |   <dns enable='no'/>
	I0414 13:50:39.349324 1234466 main.go:141] libmachine: (bridge-734713) DBG |   
	I0414 13:50:39.349332 1234466 main.go:141] libmachine: (bridge-734713) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0414 13:50:39.349338 1234466 main.go:141] libmachine: (bridge-734713) DBG |     <dhcp>
	I0414 13:50:39.349349 1234466 main.go:141] libmachine: (bridge-734713) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0414 13:50:39.349369 1234466 main.go:141] libmachine: (bridge-734713) DBG |     </dhcp>
	I0414 13:50:39.349383 1234466 main.go:141] libmachine: (bridge-734713) DBG |   </ip>
	I0414 13:50:39.349388 1234466 main.go:141] libmachine: (bridge-734713) DBG |   
	I0414 13:50:39.349396 1234466 main.go:141] libmachine: (bridge-734713) DBG | </network>
	I0414 13:50:39.349401 1234466 main.go:141] libmachine: (bridge-734713) DBG | 
	I0414 13:50:39.356260 1234466 main.go:141] libmachine: (bridge-734713) DBG | trying to create private KVM network mk-bridge-734713 192.168.50.0/24...
	I0414 13:50:39.446944 1234466 main.go:141] libmachine: (bridge-734713) setting up store path in /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713 ...
	I0414 13:50:39.446981 1234466 main.go:141] libmachine: (bridge-734713) DBG | private KVM network mk-bridge-734713 192.168.50.0/24 created
	I0414 13:50:39.446995 1234466 main.go:141] libmachine: (bridge-734713) building disk image from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 13:50:39.447018 1234466 main.go:141] libmachine: (bridge-734713) Downloading /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 13:50:39.447037 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.446840 1234612 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:50:39.775963 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.775805 1234612 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa...
	I0414 13:50:40.534757 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:40.534576 1234612 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/bridge-734713.rawdisk...
	I0414 13:50:40.534793 1234466 main.go:141] libmachine: (bridge-734713) DBG | Writing magic tar header
	I0414 13:50:40.534805 1234466 main.go:141] libmachine: (bridge-734713) DBG | Writing SSH key tar header
	I0414 13:50:40.534812 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:40.534739 1234612 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713 ...
	I0414 13:50:40.535006 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713
	I0414 13:50:40.535061 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines
	I0414 13:50:40.535073 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713 (perms=drwx------)
	I0414 13:50:40.535087 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:50:40.535139 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines (perms=drwxr-xr-x)
	I0414 13:50:40.535174 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927
	I0414 13:50:40.535184 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube (perms=drwxr-xr-x)
	I0414 13:50:40.535195 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927 (perms=drwxrwxr-x)
	I0414 13:50:40.535207 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 13:50:40.535236 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 13:50:40.535247 1234466 main.go:141] libmachine: (bridge-734713) creating domain...
	I0414 13:50:40.535282 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 13:50:40.535298 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins
	I0414 13:50:40.535312 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home
	I0414 13:50:40.535322 1234466 main.go:141] libmachine: (bridge-734713) DBG | skipping /home - not owner
	I0414 13:50:40.536640 1234466 main.go:141] libmachine: (bridge-734713) define libvirt domain using xml: 
	I0414 13:50:40.536667 1234466 main.go:141] libmachine: (bridge-734713) <domain type='kvm'>
	I0414 13:50:40.536679 1234466 main.go:141] libmachine: (bridge-734713)   <name>bridge-734713</name>
	I0414 13:50:40.536689 1234466 main.go:141] libmachine: (bridge-734713)   <memory unit='MiB'>3072</memory>
	I0414 13:50:40.536699 1234466 main.go:141] libmachine: (bridge-734713)   <vcpu>2</vcpu>
	I0414 13:50:40.536707 1234466 main.go:141] libmachine: (bridge-734713)   <features>
	I0414 13:50:40.536717 1234466 main.go:141] libmachine: (bridge-734713)     <acpi/>
	I0414 13:50:40.536734 1234466 main.go:141] libmachine: (bridge-734713)     <apic/>
	I0414 13:50:40.536745 1234466 main.go:141] libmachine: (bridge-734713)     <pae/>
	I0414 13:50:40.536752 1234466 main.go:141] libmachine: (bridge-734713)     
	I0414 13:50:40.536760 1234466 main.go:141] libmachine: (bridge-734713)   </features>
	I0414 13:50:40.536769 1234466 main.go:141] libmachine: (bridge-734713)   <cpu mode='host-passthrough'>
	I0414 13:50:40.536779 1234466 main.go:141] libmachine: (bridge-734713)   
	I0414 13:50:40.536788 1234466 main.go:141] libmachine: (bridge-734713)   </cpu>
	I0414 13:50:40.536799 1234466 main.go:141] libmachine: (bridge-734713)   <os>
	I0414 13:50:40.536808 1234466 main.go:141] libmachine: (bridge-734713)     <type>hvm</type>
	I0414 13:50:40.536821 1234466 main.go:141] libmachine: (bridge-734713)     <boot dev='cdrom'/>
	I0414 13:50:40.536830 1234466 main.go:141] libmachine: (bridge-734713)     <boot dev='hd'/>
	I0414 13:50:40.536842 1234466 main.go:141] libmachine: (bridge-734713)     <bootmenu enable='no'/>
	I0414 13:50:40.536847 1234466 main.go:141] libmachine: (bridge-734713)   </os>
	I0414 13:50:40.536855 1234466 main.go:141] libmachine: (bridge-734713)   <devices>
	I0414 13:50:40.536862 1234466 main.go:141] libmachine: (bridge-734713)     <disk type='file' device='cdrom'>
	I0414 13:50:40.536880 1234466 main.go:141] libmachine: (bridge-734713)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/boot2docker.iso'/>
	I0414 13:50:40.536890 1234466 main.go:141] libmachine: (bridge-734713)       <target dev='hdc' bus='scsi'/>
	I0414 13:50:40.536897 1234466 main.go:141] libmachine: (bridge-734713)       <readonly/>
	I0414 13:50:40.536906 1234466 main.go:141] libmachine: (bridge-734713)     </disk>
	I0414 13:50:40.536920 1234466 main.go:141] libmachine: (bridge-734713)     <disk type='file' device='disk'>
	I0414 13:50:40.536933 1234466 main.go:141] libmachine: (bridge-734713)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 13:50:40.536948 1234466 main.go:141] libmachine: (bridge-734713)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/bridge-734713.rawdisk'/>
	I0414 13:50:40.536958 1234466 main.go:141] libmachine: (bridge-734713)       <target dev='hda' bus='virtio'/>
	I0414 13:50:40.536973 1234466 main.go:141] libmachine: (bridge-734713)     </disk>
	I0414 13:50:40.536983 1234466 main.go:141] libmachine: (bridge-734713)     <interface type='network'>
	I0414 13:50:40.536990 1234466 main.go:141] libmachine: (bridge-734713)       <source network='mk-bridge-734713'/>
	I0414 13:50:40.537006 1234466 main.go:141] libmachine: (bridge-734713)       <model type='virtio'/>
	I0414 13:50:40.537018 1234466 main.go:141] libmachine: (bridge-734713)     </interface>
	I0414 13:50:40.537028 1234466 main.go:141] libmachine: (bridge-734713)     <interface type='network'>
	I0414 13:50:40.537036 1234466 main.go:141] libmachine: (bridge-734713)       <source network='default'/>
	I0414 13:50:40.537045 1234466 main.go:141] libmachine: (bridge-734713)       <model type='virtio'/>
	I0414 13:50:40.537053 1234466 main.go:141] libmachine: (bridge-734713)     </interface>
	I0414 13:50:40.537063 1234466 main.go:141] libmachine: (bridge-734713)     <serial type='pty'>
	I0414 13:50:40.537071 1234466 main.go:141] libmachine: (bridge-734713)       <target port='0'/>
	I0414 13:50:40.537082 1234466 main.go:141] libmachine: (bridge-734713)     </serial>
	I0414 13:50:40.537094 1234466 main.go:141] libmachine: (bridge-734713)     <console type='pty'>
	I0414 13:50:40.537105 1234466 main.go:141] libmachine: (bridge-734713)       <target type='serial' port='0'/>
	I0414 13:50:40.537112 1234466 main.go:141] libmachine: (bridge-734713)     </console>
	I0414 13:50:40.537121 1234466 main.go:141] libmachine: (bridge-734713)     <rng model='virtio'>
	I0414 13:50:40.537129 1234466 main.go:141] libmachine: (bridge-734713)       <backend model='random'>/dev/random</backend>
	I0414 13:50:40.537138 1234466 main.go:141] libmachine: (bridge-734713)     </rng>
	I0414 13:50:40.537146 1234466 main.go:141] libmachine: (bridge-734713)     
	I0414 13:50:40.537151 1234466 main.go:141] libmachine: (bridge-734713)     
	I0414 13:50:40.537158 1234466 main.go:141] libmachine: (bridge-734713)   </devices>
	I0414 13:50:40.537168 1234466 main.go:141] libmachine: (bridge-734713) </domain>
	I0414 13:50:40.537180 1234466 main.go:141] libmachine: (bridge-734713) 
	I0414 13:50:40.542293 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:a2:c5:c3 in network default
	I0414 13:50:40.543155 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:40.543181 1234466 main.go:141] libmachine: (bridge-734713) starting domain...
	I0414 13:50:40.543192 1234466 main.go:141] libmachine: (bridge-734713) ensuring networks are active...
	I0414 13:50:40.544085 1234466 main.go:141] libmachine: (bridge-734713) Ensuring network default is active
	I0414 13:50:40.544503 1234466 main.go:141] libmachine: (bridge-734713) Ensuring network mk-bridge-734713 is active
	I0414 13:50:40.545224 1234466 main.go:141] libmachine: (bridge-734713) getting domain XML...
	I0414 13:50:40.546220 1234466 main.go:141] libmachine: (bridge-734713) creating domain...
	I0414 13:50:40.864611 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:40.872238 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:40.872889 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:40.872945 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:40.873296 1232896 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 13:50:40.878238 1232896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:50:40.893229 1232896 kubeadm.go:883] updating cluster {Name:flannel-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:50:40.893411 1232896 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 13:50:40.893473 1232896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:50:40.927159 1232896 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 13:50:40.927257 1232896 ssh_runner.go:195] Run: which lz4
	I0414 13:50:40.931385 1232896 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:50:40.935992 1232896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:50:40.936041 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 13:50:42.508560 1232896 crio.go:462] duration metric: took 1.577211857s to copy over tarball
	I0414 13:50:42.508755 1232896 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:50:42.186131 1234466 main.go:141] libmachine: (bridge-734713) waiting for IP...
	I0414 13:50:42.187150 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:42.187900 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:42.188021 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:42.187885 1234612 retry.go:31] will retry after 209.280153ms: waiting for domain to come up
	I0414 13:50:42.400953 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:42.401780 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:42.401812 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:42.401758 1234612 retry.go:31] will retry after 258.587195ms: waiting for domain to come up
	I0414 13:50:42.662535 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:42.663254 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:42.663301 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:42.663223 1234612 retry.go:31] will retry after 447.059078ms: waiting for domain to come up
	I0414 13:50:43.112050 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:43.112698 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:43.112729 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:43.112651 1234612 retry.go:31] will retry after 509.754419ms: waiting for domain to come up
	I0414 13:50:43.624778 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:43.625482 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:43.625535 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:43.625448 1234612 retry.go:31] will retry after 623.011152ms: waiting for domain to come up
	I0414 13:50:44.250093 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:44.250644 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:44.250686 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:44.250534 1234612 retry.go:31] will retry after 764.557829ms: waiting for domain to come up
	I0414 13:50:45.017538 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:45.018426 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:45.018451 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:45.018313 1234612 retry.go:31] will retry after 968.96203ms: waiting for domain to come up
	I0414 13:50:45.989225 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:45.990298 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:45.990328 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:45.990229 1234612 retry.go:31] will retry after 918.990856ms: waiting for domain to come up
	I0414 13:50:48.524155 1223410 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 13:50:48.524328 1223410 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 13:50:48.525904 1223410 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:50:48.525995 1223410 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:50:48.526105 1223410 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:50:48.526269 1223410 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:50:48.526421 1223410 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:50:48.526514 1223410 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:50:48.528418 1223410 out.go:235]   - Generating certificates and keys ...
	I0414 13:50:48.528530 1223410 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:50:48.528624 1223410 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:50:48.528765 1223410 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 13:50:48.528871 1223410 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 13:50:48.528983 1223410 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 13:50:48.529064 1223410 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 13:50:48.529155 1223410 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 13:50:48.529254 1223410 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 13:50:48.529417 1223410 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 13:50:48.529560 1223410 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 13:50:48.529604 1223410 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 13:50:48.529704 1223410 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:50:48.529789 1223410 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:50:48.529839 1223410 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:50:48.529919 1223410 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:50:48.530000 1223410 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:50:48.530167 1223410 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:50:48.530286 1223410 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:50:48.530362 1223410 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:50:48.530461 1223410 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:50:45.310453 1232896 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.801647363s)
	I0414 13:50:45.310493 1232896 crio.go:469] duration metric: took 2.801895924s to extract the tarball
	I0414 13:50:45.310504 1232896 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:50:45.356652 1232896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:50:45.406626 1232896 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 13:50:45.406661 1232896 cache_images.go:84] Images are preloaded, skipping loading
	I0414 13:50:45.406670 1232896 kubeadm.go:934] updating node { 192.168.72.152 8443 v1.32.2 crio true true} ...
	I0414 13:50:45.406815 1232896 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-734713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0414 13:50:45.406909 1232896 ssh_runner.go:195] Run: crio config
	I0414 13:50:45.461461 1232896 cni.go:84] Creating CNI manager for "flannel"
	I0414 13:50:45.461488 1232896 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:50:45.461513 1232896 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.152 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-734713 NodeName:flannel-734713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 13:50:45.461635 1232896 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-734713"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.152"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.152"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 13:50:45.461707 1232896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 13:50:45.473005 1232896 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:50:45.473087 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:50:45.485045 1232896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0414 13:50:45.506983 1232896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:50:45.530421 1232896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0414 13:50:45.555515 1232896 ssh_runner.go:195] Run: grep 192.168.72.152	control-plane.minikube.internal$ /etc/hosts
	I0414 13:50:45.560505 1232896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:50:45.575551 1232896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:50:45.720221 1232896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:50:45.747368 1232896 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713 for IP: 192.168.72.152
	I0414 13:50:45.747403 1232896 certs.go:194] generating shared ca certs ...
	I0414 13:50:45.747430 1232896 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:45.747707 1232896 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:50:45.747811 1232896 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:50:45.747835 1232896 certs.go:256] generating profile certs ...
	I0414 13:50:45.747918 1232896 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.key
	I0414 13:50:45.747937 1232896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt with IP's: []
	I0414 13:50:45.922380 1232896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt ...
	I0414 13:50:45.922422 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: {Name:mk1de736571f3f8c7d352cc6b2b670d2f7a3f166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:45.922639 1232896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.key ...
	I0414 13:50:45.922655 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.key: {Name:mk2f6eadeffd7c817852e1cf122fbd49307e71e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:45.922754 1232896 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f
	I0414 13:50:45.922778 1232896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.152]
	I0414 13:50:46.110029 1232896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f ...
	I0414 13:50:46.110069 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f: {Name:mk87d3d3dae2adc62fb3b924b2cc7bd153bf0895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.110292 1232896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f ...
	I0414 13:50:46.110311 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f: {Name:mka9b50ff1a0a846f6aad9c4e3e0e6a306ad6a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.110419 1232896 certs.go:381] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt
	I0414 13:50:46.110518 1232896 certs.go:385] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key
	I0414 13:50:46.110597 1232896 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key
	I0414 13:50:46.110628 1232896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt with IP's: []
	I0414 13:50:46.456186 1232896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt ...
	I0414 13:50:46.456227 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt: {Name:mkfb4bf081c9c81523a9ab1a930bbd9a48e04eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.456431 1232896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key ...
	I0414 13:50:46.456457 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key: {Name:mk555b1a424c2e8f038bac59e6d58cf02d051438 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.456681 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:50:46.456722 1232896 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:50:46.456734 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:50:46.456765 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:50:46.456797 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:50:46.456827 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:50:46.456889 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:50:46.457527 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:50:46.487196 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:50:46.515169 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:50:46.539007 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:50:46.565526 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 13:50:46.592839 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:50:46.620546 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:50:46.650296 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 13:50:46.677899 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:50:46.706202 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:50:46.734768 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
	I0414 13:50:46.760248 1232896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:50:46.779528 1232896 ssh_runner.go:195] Run: openssl version
	I0414 13:50:46.787229 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:50:46.799578 1232896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:50:46.805310 1232896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:50:46.805402 1232896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:50:46.812411 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 13:50:46.825822 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:50:46.838323 1232896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:50:46.844286 1232896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:50:46.844403 1232896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:50:46.851198 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
	I0414 13:50:46.864017 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:50:46.875865 1232896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:50:46.881178 1232896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:50:46.881257 1232896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:50:46.890423 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 13:50:46.909837 1232896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:50:46.915589 1232896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 13:50:46.915675 1232896 kubeadm.go:392] StartCluster: {Name:flannel-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:50:46.915774 1232896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:50:46.915834 1232896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:50:46.954926 1232896 cri.go:89] found id: ""
	I0414 13:50:46.955033 1232896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:50:46.966456 1232896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:50:46.977813 1232896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:50:46.989696 1232896 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:50:46.989730 1232896 kubeadm.go:157] found existing configuration files:
	
	I0414 13:50:46.989792 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:50:47.000961 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:50:47.001039 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:50:47.012218 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:50:47.023049 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:50:47.023138 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:50:47.033513 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:50:47.045033 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:50:47.045111 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:50:47.055536 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:50:47.065805 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:50:47.065884 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:50:47.078066 1232896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:50:47.261431 1232896 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 13:50:48.532385 1223410 out.go:235]   - Booting up control plane ...
	I0414 13:50:48.532556 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:50:48.532689 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:50:48.532768 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:50:48.532843 1223410 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:50:48.533084 1223410 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:50:48.533159 1223410 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 13:50:48.533265 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.533525 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.533594 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.533814 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.533912 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534108 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534172 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534394 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534516 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534801 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534821 1223410 kubeadm.go:310] 
	I0414 13:50:48.534885 1223410 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 13:50:48.534955 1223410 kubeadm.go:310] 		timed out waiting for the condition
	I0414 13:50:48.534967 1223410 kubeadm.go:310] 
	I0414 13:50:48.535000 1223410 kubeadm.go:310] 	This error is likely caused by:
	I0414 13:50:48.535047 1223410 kubeadm.go:310] 		- The kubelet is not running
	I0414 13:50:48.535180 1223410 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 13:50:48.535194 1223410 kubeadm.go:310] 
	I0414 13:50:48.535371 1223410 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 13:50:48.535439 1223410 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 13:50:48.535500 1223410 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 13:50:48.535585 1223410 kubeadm.go:310] 
	I0414 13:50:48.535769 1223410 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 13:50:48.535905 1223410 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 13:50:48.535922 1223410 kubeadm.go:310] 
	I0414 13:50:48.536089 1223410 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 13:50:48.536225 1223410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 13:50:48.536329 1223410 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 13:50:48.536413 1223410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 13:50:48.536508 1223410 kubeadm.go:310] 
	I0414 13:50:48.536517 1223410 kubeadm.go:394] duration metric: took 8m1.284425887s to StartCluster
	I0414 13:50:48.536575 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:50:48.536648 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:50:48.585550 1223410 cri.go:89] found id: ""
	I0414 13:50:48.585590 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.585601 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:50:48.585609 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:50:48.585672 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:50:48.626898 1223410 cri.go:89] found id: ""
	I0414 13:50:48.626928 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.626940 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:50:48.626948 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:50:48.627009 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:50:48.670274 1223410 cri.go:89] found id: ""
	I0414 13:50:48.670317 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.670330 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:50:48.670338 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:50:48.670411 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:50:48.720563 1223410 cri.go:89] found id: ""
	I0414 13:50:48.720600 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.720611 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:50:48.720619 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:50:48.720686 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:50:48.767764 1223410 cri.go:89] found id: ""
	I0414 13:50:48.767799 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.767807 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:50:48.767814 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:50:48.767866 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:50:48.818486 1223410 cri.go:89] found id: ""
	I0414 13:50:48.818531 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.818544 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:50:48.818553 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:50:48.818619 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:50:48.867564 1223410 cri.go:89] found id: ""
	I0414 13:50:48.867644 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.867692 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:50:48.867706 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:50:48.867774 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:50:48.906916 1223410 cri.go:89] found id: ""
	I0414 13:50:48.906950 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.906958 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:50:48.906971 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:50:48.906988 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:50:48.955626 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:50:48.955683 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:50:49.022469 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:50:49.022525 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:50:49.041402 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:50:49.041449 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:50:49.131342 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:50:49.131373 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:50:49.131392 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0414 13:50:49.248634 1223410 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 13:50:49.248726 1223410 out.go:270] * 
	W0414 13:50:49.248809 1223410 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:50:49.248828 1223410 out.go:270] * 
	W0414 13:50:49.249735 1223410 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 13:50:49.253971 1223410 out.go:201] 
	W0414 13:50:49.255696 1223410 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:50:49.255776 1223410 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 13:50:49.255807 1223410 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 13:50:49.257975 1223410 out.go:201] 
	
	
	==> CRI-O <==
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.480590617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744638650480546897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cce272b-ed31-455c-bab8-3067fb288234 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.481534254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=196f8cf5-5a10-427f-87d8-6e8a32c14244 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.481638802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=196f8cf5-5a10-427f-87d8-6e8a32c14244 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.481702844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=196f8cf5-5a10-427f-87d8-6e8a32c14244 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.532428939Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90aa163b-47bf-43db-8cdc-59276449a2ae name=/runtime.v1.RuntimeService/Version
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.532589465Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90aa163b-47bf-43db-8cdc-59276449a2ae name=/runtime.v1.RuntimeService/Version
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.534679372Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34582f0e-8397-46e0-9f1c-fed07a0d4056 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.535324843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744638650535284415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34582f0e-8397-46e0-9f1c-fed07a0d4056 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.536408226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae4ed755-6c9e-4088-ac48-600b5c420dc4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.536554718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae4ed755-6c9e-4088-ac48-600b5c420dc4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.536619943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ae4ed755-6c9e-4088-ac48-600b5c420dc4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.621434513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3453c64-9dc2-46d6-82b3-ce8abbe7639a name=/runtime.v1.RuntimeService/Version
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.621658640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3453c64-9dc2-46d6-82b3-ce8abbe7639a name=/runtime.v1.RuntimeService/Version
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.623551149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=902b6393-2ded-4d4a-8097-5a8e9ff054fe name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.624433243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744638650624395492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=902b6393-2ded-4d4a-8097-5a8e9ff054fe name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.625314084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c314bbe4-c91f-478c-b8df-5bc7d0bbddf6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.625417792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c314bbe4-c91f-478c-b8df-5bc7d0bbddf6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.625510370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c314bbe4-c91f-478c-b8df-5bc7d0bbddf6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.678904184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d85c6c9-54ad-4a9e-991f-6dd462e4d9eb name=/runtime.v1.RuntimeService/Version
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.679003086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d85c6c9-54ad-4a9e-991f-6dd462e4d9eb name=/runtime.v1.RuntimeService/Version
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.681406406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=906b9f17-8b25-4c93-b8aa-7ed67fbed354 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.682366795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744638650682326755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=906b9f17-8b25-4c93-b8aa-7ed67fbed354 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.683948477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4af743b-d388-4076-9e5d-bed4d9acb0b9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.684041442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4af743b-d388-4076-9e5d-bed4d9acb0b9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:50:50 old-k8s-version-966509 crio[626]: time="2025-04-14 13:50:50.684170559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f4af743b-d388-4076-9e5d-bed4d9acb0b9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 13:42] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052303] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037424] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.184043] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.220918] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.632782] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.648390] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.065849] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072695] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.193720] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.145562] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.255078] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +8.069901] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.072646] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.342677] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +8.500421] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 13:46] systemd-fstab-generator[4889]: Ignoring "noauto" option for root device
	[Apr14 13:48] systemd-fstab-generator[5166]: Ignoring "noauto" option for root device
	[  +0.062342] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:50:50 up 8 min,  0 users,  load average: 0.24, 0.16, 0.09
	Linux old-k8s-version-966509 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: net.(*sysDialer).dialTCP(0xc000ae2e80, 0x4f7fe40, 0xc000aeb9e0, 0x0, 0xc000afb650, 0x57b620, 0x48ab5d6, 0x7f344932a1c0)
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: net.(*sysDialer).dialSingle(0xc000ae2e80, 0x4f7fe40, 0xc000aeb9e0, 0x4f1ff00, 0xc000afb650, 0x0, 0x0, 0x0, 0x0)
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: net.(*sysDialer).dialSerial(0xc000ae2e80, 0x4f7fe40, 0xc000aeb9e0, 0xc000adf190, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /usr/local/go/src/net/dial.go:548 +0x152
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: net.(*Dialer).DialContext(0xc000aa0f00, 0x4f7fe00, 0xc000128018, 0x48ab5d6, 0x3, 0xc000adc900, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000ab2320, 0x4f7fe00, 0xc000128018, 0x48ab5d6, 0x3, 0xc000adc900, 0x24, 0x60, 0x7f344932a578, 0x118, ...)
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: net/http.(*Transport).dial(0xc00046c500, 0x4f7fe00, 0xc000128018, 0x48ab5d6, 0x3, 0xc000adc900, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: net/http.(*Transport).dialConn(0xc00046c500, 0x4f7fe00, 0xc000128018, 0x0, 0xc000388480, 0x5, 0xc000adc900, 0x24, 0x0, 0xc000ad1c20, ...)
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: net/http.(*Transport).dialConnFor(0xc00046c500, 0xc000aae790)
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: created by net/http.(*Transport).queueForDial
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: goroutine 166 [select]:
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000aeb9e0, 0xc000ae2f00, 0xc000ae5920, 0xc000ae58c0)
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]: created by net.(*netFD).connect
	Apr 14 13:50:50 old-k8s-version-966509 kubelet[5344]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Apr 14 13:50:50 old-k8s-version-966509 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 14 13:50:50 old-k8s-version-966509 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509 -n old-k8s-version-966509
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 2 (392.934189ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-966509" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (516.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:51:00.246673 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:51:20.218382 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:51:49.053043 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:52:22.168833 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:52:40.312202 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:52:40.318711 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:52:40.330293 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:52:40.351809 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:52:40.393397 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:52:40.475036 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:52:40.636555 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:52:40.958523 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:52:41.600777 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:52:42.882417 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:52:45.444736 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:52:50.566922 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:53:00.809060 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:53:18.440385 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:53:21.290551 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:53:46.145454 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:53:51.046205 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:51.052733 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:51.064329 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:51.085951 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:51.127515 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:51.209128 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:51.370851 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:53:51.692755 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:53:52.335084 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:53:53.616791 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:53:56.178710 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:01.300122 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:02.252568 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:11.541522 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:32.023972 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:37.319439 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:37.325916 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:37.337512 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:37.359223 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:37.400849 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:37.482479 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:37.644277 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:37.966331 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:38.305388 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:38.608645 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:39.891110 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:42.453045 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:47.575504 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:57.146647 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:57.818015 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:59.124091 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:59.130842 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:59.142555 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:59.164363 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:59.206028 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:59.287700 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:54:59.449541 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:54:59.771521 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:55:00.412923 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:01.694434 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:04.255924 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:06.011060 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:09.378427 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:12.985488 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:18.299509 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:19.620369 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:24.175079 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:33.994711 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:55:34.001240 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:55:34.012858 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:55:34.034503 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:55:34.076135 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:55:34.157780 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:55:34.319515 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:34.641419 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:55:35.283698 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:36.565536 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:39.127316 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:40.101907 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:44.249772 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:54.491940 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:55:59.261006 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:14.973315 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:17.920246 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:17.927017 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:17.938620 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:17.960201 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:18.001785 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:18.083436 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:18.245193 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:18.566658 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:19.208476 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:20.490568 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:21.063922 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:23.052400 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:28.174505 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:34.907747 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:38.416973 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:43.136442 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:43.143044 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:43.154608 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:43.176208 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:43.217816 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:43.299491 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:43.461267 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:43.783159 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:56:44.424513 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:45.706167 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:48.268554 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:49.052634 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:53.390664 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:55.934928 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:56:58.899040 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:57:03.633097 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
[previous warning repeated 17 times in total]
E0414 13:57:21.182518 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:57:24.114472 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
[previous warning repeated 16 times in total]
E0414 13:57:39.860682 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:57:40.311813 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:57:42.986306 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
[previous warning repeated 22 times in total]
E0414 13:58:05.076804 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:58:08.017299 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
[previous warning repeated 10 times in total]
E0414 13:58:17.857139 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:58:18.439990 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
[previous warning repeated 33 times in total]
E0414 13:58:51.046300 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
[previous warning repeated 11 times in total]
E0414 13:59:01.783005 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
[previous warning repeated 17 times in total]
E0414 13:59:18.749336 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
[previous warning repeated 8 times in total]
E0414 13:59:26.998780 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
[previous warning repeated 10 times in total]
E0414 13:59:37.320048 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:59:38.305858 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
[the identical WARNING from helpers_test.go:329 repeated 13 more times while the test kept polling]
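The connection-refused warnings above all target the same apiserver endpoint, https://192.168.61.227:8443, so the dashboard pod-list failures are a symptom of the control plane being unreachable rather than of the dashboard workload itself. A quick manual probe of that endpoint, sketched here with the address and profile name taken from the log (illustrative only, not part of the recorded run), would confirm whether anything is listening:

	# expect "connection refused" if the apiserver is down, an HTTP response if it is up
	curl -k --max-time 5 https://192.168.61.227:8443/healthz
	# cross-check the component state minikube reports for this profile
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509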
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509 -n old-k8s-version-966509
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 2 (263.561715ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-966509" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
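For reference, the wait that times out at start_stop_delete_test.go:272 is for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. With a reachable apiserver, the equivalent manual check would look roughly like the sketch below; the kubectl context name is assumed to match the minikube profile, which is minikube's default when it writes the kubeconfig:

	# list the dashboard pod the test is polling for
	kubectl --context old-k8s-version-966509 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# or block until it is Ready, mirroring the test's wait loop
	kubectl --context old-k8s-version-966509 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=90s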
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 2 (251.151237ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
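Taken together, the two status probes show the VM host as Running while the apiserver is Stopped, which is consistent with the connection-refused errors earlier in the test. Both fields can be read in a single call by combining the Go-template fields the test already uses; a sketch, with the same profile name:

	out/minikube-linux-amd64 status -p old-k8s-version-966509 --format='host={{.Host}} apiserver={{.APIServer}}'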
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-966509 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-734713 sudo iptables                       | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo docker                         | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo find                           | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo crio                           | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-734713                                     | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 13:50:31
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 13:50:31.474135 1234466 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:50:31.474253 1234466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:50:31.474257 1234466 out.go:358] Setting ErrFile to fd 2...
	I0414 13:50:31.474262 1234466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:50:31.474520 1234466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:50:31.475288 1234466 out.go:352] Setting JSON to false
	I0414 13:50:31.477061 1234466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19979,"bootTime":1744618653,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:50:31.477154 1234466 start.go:139] virtualization: kvm guest
	I0414 13:50:31.479607 1234466 out.go:177] * [bridge-734713] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:50:31.481863 1234466 notify.go:220] Checking for updates...
	I0414 13:50:31.481878 1234466 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 13:50:31.483700 1234466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:50:31.485289 1234466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:50:31.487251 1234466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:50:31.489524 1234466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:50:31.491617 1234466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:50:31.494384 1234466 config.go:182] Loaded profile config "enable-default-cni-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:50:31.494599 1234466 config.go:182] Loaded profile config "flannel-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:50:31.494768 1234466 config.go:182] Loaded profile config "old-k8s-version-966509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 13:50:31.494943 1234466 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:50:31.538765 1234466 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 13:50:31.540246 1234466 start.go:297] selected driver: kvm2
	I0414 13:50:31.540269 1234466 start.go:901] validating driver "kvm2" against <nil>
	I0414 13:50:31.540283 1234466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:50:31.541164 1234466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:50:31.541264 1234466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:50:31.559397 1234466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:50:31.559459 1234466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 13:50:31.559769 1234466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:50:31.559813 1234466 cni.go:84] Creating CNI manager for "bridge"
	I0414 13:50:31.559821 1234466 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 13:50:31.559887 1234466 start.go:340] cluster config:
	{Name:bridge-734713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:50:31.560014 1234466 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:50:31.562179 1234466 out.go:177] * Starting "bridge-734713" primary control-plane node in "bridge-734713" cluster
	I0414 13:50:29.334946 1231023 pod_ready.go:103] pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace has status "Ready":"False"
	I0414 13:50:31.833321 1231023 pod_ready.go:98] pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:31 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.69 HostIPs:[{IP:192.168.39.69}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 13:50:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 13:50:21 +0000 UTC,FinishedAt:2025-04-14 13:50:31 +0000 UTC,ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3 Started:0xc000545230 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0007929a0} {Name:kube-api-access-gz8ls MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0007929b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 13:50:31.833367 1231023 pod_ready.go:82] duration metric: took 12.006367856s for pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace to be "Ready" ...
	E0414 13:50:31.833383 1231023 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:31 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.69 HostIPs:[{IP:192.168.39.69}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 13:50:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 13:50:21 +0000 UTC,FinishedAt:2025-04-14 13:50:31 +0000 UTC,ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3 Started:0xc000545230 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0007929a0} {Name:kube-api-access-gz8ls MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0007929b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 13:50:31.833400 1231023 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wc7z8" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.838889 1231023 pod_ready.go:93] pod "coredns-668d6bf9bc-wc7z8" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.838917 1231023 pod_ready.go:82] duration metric: took 5.507401ms for pod "coredns-668d6bf9bc-wc7z8" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.838931 1231023 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.846654 1231023 pod_ready.go:93] pod "etcd-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.846680 1231023 pod_ready.go:82] duration metric: took 7.739982ms for pod "etcd-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.846693 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.851573 1231023 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.851599 1231023 pod_ready.go:82] duration metric: took 4.900716ms for pod "kube-apiserver-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.851610 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.861178 1231023 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.861205 1231023 pod_ready.go:82] duration metric: took 9.588121ms for pod "kube-controller-manager-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.861215 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-9w89x" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.231339 1231023 pod_ready.go:93] pod "kube-proxy-9w89x" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:32.231363 1231023 pod_ready.go:82] duration metric: took 370.139759ms for pod "kube-proxy-9w89x" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.231373 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.630989 1231023 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:32.631015 1231023 pod_ready.go:82] duration metric: took 399.636056ms for pod "kube-scheduler-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.631024 1231023 pod_ready.go:39] duration metric: took 12.810229756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:50:32.631043 1231023 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:50:32.631107 1231023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:50:32.645651 1231023 api_server.go:72] duration metric: took 13.222143925s to wait for apiserver process to appear ...
	I0414 13:50:32.645687 1231023 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:50:32.645709 1231023 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I0414 13:50:32.651253 1231023 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I0414 13:50:32.652492 1231023 api_server.go:141] control plane version: v1.32.2
	I0414 13:50:32.652525 1231023 api_server.go:131] duration metric: took 6.829312ms to wait for apiserver health ...
	I0414 13:50:32.652539 1231023 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:50:32.832476 1231023 system_pods.go:59] 7 kube-system pods found
	I0414 13:50:32.832516 1231023 system_pods.go:61] "coredns-668d6bf9bc-wc7z8" [0f79f746-c943-4e60-a284-492bb981e61f] Running
	I0414 13:50:32.832522 1231023 system_pods.go:61] "etcd-enable-default-cni-734713" [19510a8c-d3ce-4d2d-ae16-913bbdf644aa] Running
	I0414 13:50:32.832527 1231023 system_pods.go:61] "kube-apiserver-enable-default-cni-734713" [adb74c7b-1709-4a20-b0af-e71ceff39a2c] Running
	I0414 13:50:32.832531 1231023 system_pods.go:61] "kube-controller-manager-enable-default-cni-734713" [1ac3ccc9-9356-4859-a021-41a7adf3620d] Running
	I0414 13:50:32.832534 1231023 system_pods.go:61] "kube-proxy-9w89x" [ced87d7f-cfc0-4474-bbff-273bf081d028] Running
	I0414 13:50:32.832539 1231023 system_pods.go:61] "kube-scheduler-enable-default-cni-734713" [5be3eb8d-6845-4526-b0bd-e870fa09ab3d] Running
	I0414 13:50:32.832542 1231023 system_pods.go:61] "storage-provisioner" [a321a108-c3a8-47b6-bfaa-c32b99f04b1e] Running
	I0414 13:50:32.832548 1231023 system_pods.go:74] duration metric: took 180.003646ms to wait for pod list to return data ...
	I0414 13:50:32.832556 1231023 default_sa.go:34] waiting for default service account to be created ...
	I0414 13:50:33.031788 1231023 default_sa.go:45] found service account: "default"
	I0414 13:50:33.031827 1231023 default_sa.go:55] duration metric: took 199.260003ms for default service account to be created ...
	I0414 13:50:33.031842 1231023 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 13:50:33.232342 1231023 system_pods.go:86] 7 kube-system pods found
	I0414 13:50:33.232377 1231023 system_pods.go:89] "coredns-668d6bf9bc-wc7z8" [0f79f746-c943-4e60-a284-492bb981e61f] Running
	I0414 13:50:33.232383 1231023 system_pods.go:89] "etcd-enable-default-cni-734713" [19510a8c-d3ce-4d2d-ae16-913bbdf644aa] Running
	I0414 13:50:33.232387 1231023 system_pods.go:89] "kube-apiserver-enable-default-cni-734713" [adb74c7b-1709-4a20-b0af-e71ceff39a2c] Running
	I0414 13:50:33.232391 1231023 system_pods.go:89] "kube-controller-manager-enable-default-cni-734713" [1ac3ccc9-9356-4859-a021-41a7adf3620d] Running
	I0414 13:50:33.232395 1231023 system_pods.go:89] "kube-proxy-9w89x" [ced87d7f-cfc0-4474-bbff-273bf081d028] Running
	I0414 13:50:33.232399 1231023 system_pods.go:89] "kube-scheduler-enable-default-cni-734713" [5be3eb8d-6845-4526-b0bd-e870fa09ab3d] Running
	I0414 13:50:33.232402 1231023 system_pods.go:89] "storage-provisioner" [a321a108-c3a8-47b6-bfaa-c32b99f04b1e] Running
	I0414 13:50:33.232408 1231023 system_pods.go:126] duration metric: took 200.561466ms to wait for k8s-apps to be running ...
	I0414 13:50:33.232415 1231023 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 13:50:33.232464 1231023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:50:33.247725 1231023 system_svc.go:56] duration metric: took 15.294005ms WaitForService to wait for kubelet
	I0414 13:50:33.247763 1231023 kubeadm.go:582] duration metric: took 13.824265507s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:50:33.247796 1231023 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:50:33.431949 1231023 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:50:33.431992 1231023 node_conditions.go:123] node cpu capacity is 2
	I0414 13:50:33.432010 1231023 node_conditions.go:105] duration metric: took 184.207615ms to run NodePressure ...
	I0414 13:50:33.432026 1231023 start.go:241] waiting for startup goroutines ...
	I0414 13:50:33.432036 1231023 start.go:246] waiting for cluster config update ...
	I0414 13:50:33.432076 1231023 start.go:255] writing updated cluster config ...
	I0414 13:50:33.432420 1231023 ssh_runner.go:195] Run: rm -f paused
	I0414 13:50:33.493793 1231023 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 13:50:33.496245 1231023 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-734713" cluster and "default" namespace by default
	I0414 13:50:29.976954 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:29.978228 1232896 main.go:141] libmachine: (flannel-734713) DBG | unable to find current IP address of domain flannel-734713 in network mk-flannel-734713
	I0414 13:50:29.978298 1232896 main.go:141] libmachine: (flannel-734713) DBG | I0414 13:50:29.977978 1232920 retry.go:31] will retry after 3.36970346s: waiting for domain to come up
	I0414 13:50:33.352083 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:33.352787 1232896 main.go:141] libmachine: (flannel-734713) DBG | unable to find current IP address of domain flannel-734713 in network mk-flannel-734713
	I0414 13:50:33.352813 1232896 main.go:141] libmachine: (flannel-734713) DBG | I0414 13:50:33.352721 1232920 retry.go:31] will retry after 4.281011349s: waiting for domain to come up
	I0414 13:50:31.563813 1234466 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 13:50:31.563891 1234466 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 13:50:31.563915 1234466 cache.go:56] Caching tarball of preloaded images
	I0414 13:50:31.564056 1234466 preload.go:172] Found /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 13:50:31.564078 1234466 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 13:50:31.564242 1234466 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/config.json ...
	I0414 13:50:31.564277 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/config.json: {Name:mk2204b108f022f99d564aa50c55629979eef512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:31.564486 1234466 start.go:360] acquireMachinesLock for bridge-734713: {Name:mk1c744e856a4885e6214d755048926590b4b12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 13:50:37.637020 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.637866 1232896 main.go:141] libmachine: (flannel-734713) found domain IP: 192.168.72.152
	I0414 13:50:37.637896 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has current primary IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.637901 1232896 main.go:141] libmachine: (flannel-734713) reserving static IP address...
	I0414 13:50:37.638450 1232896 main.go:141] libmachine: (flannel-734713) DBG | unable to find host DHCP lease matching {name: "flannel-734713", mac: "52:54:00:9e:9a:48", ip: "192.168.72.152"} in network mk-flannel-734713
	I0414 13:50:37.754591 1232896 main.go:141] libmachine: (flannel-734713) DBG | Getting to WaitForSSH function...
	I0414 13:50:37.754621 1232896 main.go:141] libmachine: (flannel-734713) reserved static IP address 192.168.72.152 for domain flannel-734713
	I0414 13:50:37.754634 1232896 main.go:141] libmachine: (flannel-734713) waiting for SSH...
	I0414 13:50:37.758535 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.759318 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:37.759361 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.759550 1232896 main.go:141] libmachine: (flannel-734713) DBG | Using SSH client type: external
	I0414 13:50:37.759582 1232896 main.go:141] libmachine: (flannel-734713) DBG | Using SSH private key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa (-rw-------)
	I0414 13:50:37.759619 1232896 main.go:141] libmachine: (flannel-734713) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:50:37.759640 1232896 main.go:141] libmachine: (flannel-734713) DBG | About to run SSH command:
	I0414 13:50:37.759648 1232896 main.go:141] libmachine: (flannel-734713) DBG | exit 0
	I0414 13:50:37.892219 1232896 main.go:141] libmachine: (flannel-734713) DBG | SSH cmd err, output: <nil>: 
	I0414 13:50:37.892578 1232896 main.go:141] libmachine: (flannel-734713) KVM machine creation complete
	I0414 13:50:37.892927 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetConfigRaw
	I0414 13:50:37.893493 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:37.893697 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:37.893915 1232896 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 13:50:37.893934 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetState
	I0414 13:50:37.895553 1232896 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 13:50:37.895573 1232896 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 13:50:37.895581 1232896 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 13:50:37.895590 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:37.899289 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.899748 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:37.899782 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.900093 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:37.900351 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:37.900554 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:37.900695 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:37.900911 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:37.901234 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:37.901251 1232896 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 13:50:38.015885 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:50:38.015916 1232896 main.go:141] libmachine: Detecting the provisioner...
	I0414 13:50:38.015928 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.019947 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.020401 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.020434 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.020711 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.021012 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.021325 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.021507 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.021832 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.022086 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.022100 1232896 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 13:50:38.136844 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 13:50:38.136924 1232896 main.go:141] libmachine: found compatible host: buildroot
	I0414 13:50:38.136932 1232896 main.go:141] libmachine: Provisioning with buildroot...
	I0414 13:50:38.136941 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetMachineName
	I0414 13:50:38.137270 1232896 buildroot.go:166] provisioning hostname "flannel-734713"
	I0414 13:50:38.137308 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetMachineName
	I0414 13:50:38.137547 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.141614 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.142106 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.142144 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.142427 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.142670 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.142873 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.143110 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.143324 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.143622 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.143683 1232896 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-734713 && echo "flannel-734713" | sudo tee /etc/hostname
	I0414 13:50:38.270664 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-734713
	
	I0414 13:50:38.270700 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.274038 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.274480 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.274509 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.274796 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.275053 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.275214 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.275388 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.275567 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.275847 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.275876 1232896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-734713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-734713/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-734713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:50:38.401361 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:50:38.401397 1232896 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 13:50:38.401429 1232896 buildroot.go:174] setting up certificates
	I0414 13:50:38.401441 1232896 provision.go:84] configureAuth start
	I0414 13:50:38.401451 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetMachineName
	I0414 13:50:38.401767 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:38.404744 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.405311 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.405344 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.405588 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.408468 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.408941 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.408973 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.409159 1232896 provision.go:143] copyHostCerts
	I0414 13:50:38.409231 1232896 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem, removing ...
	I0414 13:50:38.409255 1232896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem
	I0414 13:50:38.409353 1232896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 13:50:38.409483 1232896 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem, removing ...
	I0414 13:50:38.409494 1232896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem
	I0414 13:50:38.409521 1232896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 13:50:38.409584 1232896 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem, removing ...
	I0414 13:50:38.409592 1232896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem
	I0414 13:50:38.409616 1232896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 13:50:38.409667 1232896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.flannel-734713 san=[127.0.0.1 192.168.72.152 flannel-734713 localhost minikube]
	I0414 13:50:38.622027 1232896 provision.go:177] copyRemoteCerts
	I0414 13:50:38.622101 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:50:38.622129 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.625644 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.626308 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.626341 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.626672 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.626943 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.627175 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.627360 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:38.717313 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:50:38.746438 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0414 13:50:38.774217 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 13:50:38.800993 1232896 provision.go:87] duration metric: took 399.533017ms to configureAuth
	I0414 13:50:38.801037 1232896 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:50:38.801286 1232896 config.go:182] Loaded profile config "flannel-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:50:38.801390 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.804612 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.805077 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.805108 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.805235 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.805516 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.805686 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.805838 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.806026 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.806227 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.806245 1232896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:50:39.047256 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:50:39.047322 1232896 main.go:141] libmachine: Checking connection to Docker...
	I0414 13:50:39.047335 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetURL
	I0414 13:50:39.049101 1232896 main.go:141] libmachine: (flannel-734713) DBG | using libvirt version 6000000
	I0414 13:50:39.052133 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.052668 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.052706 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.052895 1232896 main.go:141] libmachine: Docker is up and running!
	I0414 13:50:39.052917 1232896 main.go:141] libmachine: Reticulating splines...
	I0414 13:50:39.052927 1232896 client.go:171] duration metric: took 24.751714339s to LocalClient.Create
	I0414 13:50:39.052964 1232896 start.go:167] duration metric: took 24.751802794s to libmachine.API.Create "flannel-734713"
	I0414 13:50:39.052977 1232896 start.go:293] postStartSetup for "flannel-734713" (driver="kvm2")
	I0414 13:50:39.052993 1232896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:50:39.053021 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.053344 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:50:39.053380 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.056234 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.056651 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.056683 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.056948 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.057181 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.057386 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.057603 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:39.147363 1232896 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:50:39.152531 1232896 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:50:39.152565 1232896 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 13:50:39.152666 1232896 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 13:50:39.152797 1232896 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem -> 11757462.pem in /etc/ssl/certs
	I0414 13:50:39.152913 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:50:39.163686 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:50:39.313206 1234466 start.go:364] duration metric: took 7.7486593s to acquireMachinesLock for "bridge-734713"
	I0414 13:50:39.313286 1234466 start.go:93] Provisioning new machine with config: &{Name:bridge-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:50:39.313465 1234466 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 13:50:39.191538 1232896 start.go:296] duration metric: took 138.53561ms for postStartSetup
	I0414 13:50:39.191625 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetConfigRaw
	I0414 13:50:39.192675 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:39.195982 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.196370 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.196403 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.196711 1232896 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/config.json ...
	I0414 13:50:39.196933 1232896 start.go:128] duration metric: took 24.920662395s to createHost
	I0414 13:50:39.196962 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.199575 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.199971 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.200009 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.200265 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.200506 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.200716 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.200836 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.201017 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:39.201235 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:39.201252 1232896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:50:39.312992 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744638639.294895113
	
	I0414 13:50:39.313027 1232896 fix.go:216] guest clock: 1744638639.294895113
	I0414 13:50:39.313040 1232896 fix.go:229] Guest: 2025-04-14 13:50:39.294895113 +0000 UTC Remote: 2025-04-14 13:50:39.196948569 +0000 UTC m=+25.069092558 (delta=97.946544ms)
	I0414 13:50:39.313076 1232896 fix.go:200] guest clock delta is within tolerance: 97.946544ms
	I0414 13:50:39.313084 1232896 start.go:83] releasing machines lock for "flannel-734713", held for 25.03689115s
	I0414 13:50:39.313123 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.313495 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:39.316913 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.317374 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.317407 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.317648 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.318454 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.318745 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.318848 1232896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:50:39.318899 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.319070 1232896 ssh_runner.go:195] Run: cat /version.json
	I0414 13:50:39.319106 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.322650 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.322688 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.323095 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.323127 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.323152 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.323175 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.323347 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.323526 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.323628 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.323746 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.323868 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.323943 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.324058 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:39.324098 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:39.429639 1232896 ssh_runner.go:195] Run: systemctl --version
	I0414 13:50:39.437143 1232896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:50:39.610972 1232896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:50:39.618687 1232896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:50:39.618790 1232896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:50:39.638285 1232896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:50:39.638323 1232896 start.go:495] detecting cgroup driver to use...
	I0414 13:50:39.638408 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:50:39.657548 1232896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:50:39.672881 1232896 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:50:39.672968 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:50:39.688263 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:50:39.705072 1232896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:50:39.850153 1232896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:50:40.021507 1232896 docker.go:233] disabling docker service ...
	I0414 13:50:40.021590 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:50:40.039954 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:50:40.056200 1232896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:50:40.200612 1232896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:50:40.323358 1232896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:50:40.339938 1232896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:50:40.362920 1232896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 13:50:40.363030 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.377645 1232896 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:50:40.377729 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.389966 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.401671 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.412575 1232896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:50:40.424261 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.436885 1232896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.458584 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.471169 1232896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:50:40.481527 1232896 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:50:40.481614 1232896 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:50:40.494853 1232896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:50:40.509112 1232896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:50:40.638794 1232896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:50:40.738537 1232896 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:50:40.738626 1232896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:50:40.744588 1232896 start.go:563] Will wait 60s for crictl version
	I0414 13:50:40.744653 1232896 ssh_runner.go:195] Run: which crictl
	I0414 13:50:40.749602 1232896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:50:40.793798 1232896 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:50:40.793927 1232896 ssh_runner.go:195] Run: crio --version
	I0414 13:50:40.827010 1232896 ssh_runner.go:195] Run: crio --version
	I0414 13:50:40.862877 1232896 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 13:50:39.315866 1234466 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 13:50:39.316102 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:50:39.316179 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:50:39.339239 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I0414 13:50:39.340033 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:50:39.340744 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:50:39.340773 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:50:39.341299 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:50:39.341627 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetMachineName
	I0414 13:50:39.341844 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:50:39.342129 1234466 start.go:159] libmachine.API.Create for "bridge-734713" (driver="kvm2")
	I0414 13:50:39.342175 1234466 client.go:168] LocalClient.Create starting
	I0414 13:50:39.342222 1234466 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem
	I0414 13:50:39.342289 1234466 main.go:141] libmachine: Decoding PEM data...
	I0414 13:50:39.342312 1234466 main.go:141] libmachine: Parsing certificate...
	I0414 13:50:39.342402 1234466 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem
	I0414 13:50:39.342433 1234466 main.go:141] libmachine: Decoding PEM data...
	I0414 13:50:39.342446 1234466 main.go:141] libmachine: Parsing certificate...
	I0414 13:50:39.342470 1234466 main.go:141] libmachine: Running pre-create checks...
	I0414 13:50:39.342485 1234466 main.go:141] libmachine: (bridge-734713) Calling .PreCreateCheck
	I0414 13:50:39.342957 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetConfigRaw
	I0414 13:50:39.343643 1234466 main.go:141] libmachine: Creating machine...
	I0414 13:50:39.343685 1234466 main.go:141] libmachine: (bridge-734713) Calling .Create
	I0414 13:50:39.343972 1234466 main.go:141] libmachine: (bridge-734713) creating KVM machine...
	I0414 13:50:39.343990 1234466 main.go:141] libmachine: (bridge-734713) creating network...
	I0414 13:50:39.345635 1234466 main.go:141] libmachine: (bridge-734713) DBG | found existing default KVM network
	I0414 13:50:39.347502 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.347251 1234612 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:76:3b:72} reservation:<nil>}
	I0414 13:50:39.349222 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.349053 1234612 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000340000}
	I0414 13:50:39.349267 1234466 main.go:141] libmachine: (bridge-734713) DBG | created network xml: 
	I0414 13:50:39.349280 1234466 main.go:141] libmachine: (bridge-734713) DBG | <network>
	I0414 13:50:39.349301 1234466 main.go:141] libmachine: (bridge-734713) DBG |   <name>mk-bridge-734713</name>
	I0414 13:50:39.349317 1234466 main.go:141] libmachine: (bridge-734713) DBG |   <dns enable='no'/>
	I0414 13:50:39.349324 1234466 main.go:141] libmachine: (bridge-734713) DBG |   
	I0414 13:50:39.349332 1234466 main.go:141] libmachine: (bridge-734713) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0414 13:50:39.349338 1234466 main.go:141] libmachine: (bridge-734713) DBG |     <dhcp>
	I0414 13:50:39.349349 1234466 main.go:141] libmachine: (bridge-734713) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0414 13:50:39.349369 1234466 main.go:141] libmachine: (bridge-734713) DBG |     </dhcp>
	I0414 13:50:39.349383 1234466 main.go:141] libmachine: (bridge-734713) DBG |   </ip>
	I0414 13:50:39.349388 1234466 main.go:141] libmachine: (bridge-734713) DBG |   
	I0414 13:50:39.349396 1234466 main.go:141] libmachine: (bridge-734713) DBG | </network>
	I0414 13:50:39.349401 1234466 main.go:141] libmachine: (bridge-734713) DBG | 
	I0414 13:50:39.356260 1234466 main.go:141] libmachine: (bridge-734713) DBG | trying to create private KVM network mk-bridge-734713 192.168.50.0/24...
	I0414 13:50:39.446944 1234466 main.go:141] libmachine: (bridge-734713) setting up store path in /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713 ...
	I0414 13:50:39.446981 1234466 main.go:141] libmachine: (bridge-734713) DBG | private KVM network mk-bridge-734713 192.168.50.0/24 created
	I0414 13:50:39.446995 1234466 main.go:141] libmachine: (bridge-734713) building disk image from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 13:50:39.447018 1234466 main.go:141] libmachine: (bridge-734713) Downloading /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 13:50:39.447037 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.446840 1234612 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:50:39.775963 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.775805 1234612 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa...
	I0414 13:50:40.534757 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:40.534576 1234612 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/bridge-734713.rawdisk...
	I0414 13:50:40.534793 1234466 main.go:141] libmachine: (bridge-734713) DBG | Writing magic tar header
	I0414 13:50:40.534805 1234466 main.go:141] libmachine: (bridge-734713) DBG | Writing SSH key tar header
	I0414 13:50:40.534812 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:40.534739 1234612 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713 ...
	I0414 13:50:40.535006 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713
	I0414 13:50:40.535061 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines
	I0414 13:50:40.535073 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713 (perms=drwx------)
	I0414 13:50:40.535087 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:50:40.535139 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines (perms=drwxr-xr-x)
	I0414 13:50:40.535174 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927
	I0414 13:50:40.535184 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube (perms=drwxr-xr-x)
	I0414 13:50:40.535195 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927 (perms=drwxrwxr-x)
	I0414 13:50:40.535207 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 13:50:40.535236 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 13:50:40.535247 1234466 main.go:141] libmachine: (bridge-734713) creating domain...
	I0414 13:50:40.535282 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 13:50:40.535298 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins
	I0414 13:50:40.535312 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home
	I0414 13:50:40.535322 1234466 main.go:141] libmachine: (bridge-734713) DBG | skipping /home - not owner
	I0414 13:50:40.536640 1234466 main.go:141] libmachine: (bridge-734713) define libvirt domain using xml: 
	I0414 13:50:40.536667 1234466 main.go:141] libmachine: (bridge-734713) <domain type='kvm'>
	I0414 13:50:40.536679 1234466 main.go:141] libmachine: (bridge-734713)   <name>bridge-734713</name>
	I0414 13:50:40.536689 1234466 main.go:141] libmachine: (bridge-734713)   <memory unit='MiB'>3072</memory>
	I0414 13:50:40.536699 1234466 main.go:141] libmachine: (bridge-734713)   <vcpu>2</vcpu>
	I0414 13:50:40.536707 1234466 main.go:141] libmachine: (bridge-734713)   <features>
	I0414 13:50:40.536717 1234466 main.go:141] libmachine: (bridge-734713)     <acpi/>
	I0414 13:50:40.536734 1234466 main.go:141] libmachine: (bridge-734713)     <apic/>
	I0414 13:50:40.536745 1234466 main.go:141] libmachine: (bridge-734713)     <pae/>
	I0414 13:50:40.536752 1234466 main.go:141] libmachine: (bridge-734713)     
	I0414 13:50:40.536760 1234466 main.go:141] libmachine: (bridge-734713)   </features>
	I0414 13:50:40.536769 1234466 main.go:141] libmachine: (bridge-734713)   <cpu mode='host-passthrough'>
	I0414 13:50:40.536779 1234466 main.go:141] libmachine: (bridge-734713)   
	I0414 13:50:40.536788 1234466 main.go:141] libmachine: (bridge-734713)   </cpu>
	I0414 13:50:40.536799 1234466 main.go:141] libmachine: (bridge-734713)   <os>
	I0414 13:50:40.536808 1234466 main.go:141] libmachine: (bridge-734713)     <type>hvm</type>
	I0414 13:50:40.536821 1234466 main.go:141] libmachine: (bridge-734713)     <boot dev='cdrom'/>
	I0414 13:50:40.536830 1234466 main.go:141] libmachine: (bridge-734713)     <boot dev='hd'/>
	I0414 13:50:40.536842 1234466 main.go:141] libmachine: (bridge-734713)     <bootmenu enable='no'/>
	I0414 13:50:40.536847 1234466 main.go:141] libmachine: (bridge-734713)   </os>
	I0414 13:50:40.536855 1234466 main.go:141] libmachine: (bridge-734713)   <devices>
	I0414 13:50:40.536862 1234466 main.go:141] libmachine: (bridge-734713)     <disk type='file' device='cdrom'>
	I0414 13:50:40.536880 1234466 main.go:141] libmachine: (bridge-734713)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/boot2docker.iso'/>
	I0414 13:50:40.536890 1234466 main.go:141] libmachine: (bridge-734713)       <target dev='hdc' bus='scsi'/>
	I0414 13:50:40.536897 1234466 main.go:141] libmachine: (bridge-734713)       <readonly/>
	I0414 13:50:40.536906 1234466 main.go:141] libmachine: (bridge-734713)     </disk>
	I0414 13:50:40.536920 1234466 main.go:141] libmachine: (bridge-734713)     <disk type='file' device='disk'>
	I0414 13:50:40.536933 1234466 main.go:141] libmachine: (bridge-734713)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 13:50:40.536948 1234466 main.go:141] libmachine: (bridge-734713)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/bridge-734713.rawdisk'/>
	I0414 13:50:40.536958 1234466 main.go:141] libmachine: (bridge-734713)       <target dev='hda' bus='virtio'/>
	I0414 13:50:40.536973 1234466 main.go:141] libmachine: (bridge-734713)     </disk>
	I0414 13:50:40.536983 1234466 main.go:141] libmachine: (bridge-734713)     <interface type='network'>
	I0414 13:50:40.536990 1234466 main.go:141] libmachine: (bridge-734713)       <source network='mk-bridge-734713'/>
	I0414 13:50:40.537006 1234466 main.go:141] libmachine: (bridge-734713)       <model type='virtio'/>
	I0414 13:50:40.537018 1234466 main.go:141] libmachine: (bridge-734713)     </interface>
	I0414 13:50:40.537028 1234466 main.go:141] libmachine: (bridge-734713)     <interface type='network'>
	I0414 13:50:40.537036 1234466 main.go:141] libmachine: (bridge-734713)       <source network='default'/>
	I0414 13:50:40.537045 1234466 main.go:141] libmachine: (bridge-734713)       <model type='virtio'/>
	I0414 13:50:40.537053 1234466 main.go:141] libmachine: (bridge-734713)     </interface>
	I0414 13:50:40.537063 1234466 main.go:141] libmachine: (bridge-734713)     <serial type='pty'>
	I0414 13:50:40.537071 1234466 main.go:141] libmachine: (bridge-734713)       <target port='0'/>
	I0414 13:50:40.537082 1234466 main.go:141] libmachine: (bridge-734713)     </serial>
	I0414 13:50:40.537094 1234466 main.go:141] libmachine: (bridge-734713)     <console type='pty'>
	I0414 13:50:40.537105 1234466 main.go:141] libmachine: (bridge-734713)       <target type='serial' port='0'/>
	I0414 13:50:40.537112 1234466 main.go:141] libmachine: (bridge-734713)     </console>
	I0414 13:50:40.537121 1234466 main.go:141] libmachine: (bridge-734713)     <rng model='virtio'>
	I0414 13:50:40.537129 1234466 main.go:141] libmachine: (bridge-734713)       <backend model='random'>/dev/random</backend>
	I0414 13:50:40.537138 1234466 main.go:141] libmachine: (bridge-734713)     </rng>
	I0414 13:50:40.537146 1234466 main.go:141] libmachine: (bridge-734713)     
	I0414 13:50:40.537151 1234466 main.go:141] libmachine: (bridge-734713)     
	I0414 13:50:40.537158 1234466 main.go:141] libmachine: (bridge-734713)   </devices>
	I0414 13:50:40.537168 1234466 main.go:141] libmachine: (bridge-734713) </domain>
	I0414 13:50:40.537180 1234466 main.go:141] libmachine: (bridge-734713) 
	I0414 13:50:40.542293 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:a2:c5:c3 in network default
	I0414 13:50:40.543155 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:40.543181 1234466 main.go:141] libmachine: (bridge-734713) starting domain...
	I0414 13:50:40.543192 1234466 main.go:141] libmachine: (bridge-734713) ensuring networks are active...
	I0414 13:50:40.544085 1234466 main.go:141] libmachine: (bridge-734713) Ensuring network default is active
	I0414 13:50:40.544503 1234466 main.go:141] libmachine: (bridge-734713) Ensuring network mk-bridge-734713 is active
	I0414 13:50:40.545224 1234466 main.go:141] libmachine: (bridge-734713) getting domain XML...
	I0414 13:50:40.546220 1234466 main.go:141] libmachine: (bridge-734713) creating domain...
	I0414 13:50:40.864611 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:40.872238 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:40.872889 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:40.872945 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:40.873296 1232896 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 13:50:40.878238 1232896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:50:40.893229 1232896 kubeadm.go:883] updating cluster {Name:flannel-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-734713
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:50:40.893411 1232896 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 13:50:40.893473 1232896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:50:40.927159 1232896 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 13:50:40.927257 1232896 ssh_runner.go:195] Run: which lz4
	I0414 13:50:40.931385 1232896 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:50:40.935992 1232896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:50:40.936041 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 13:50:42.508560 1232896 crio.go:462] duration metric: took 1.577211857s to copy over tarball
	I0414 13:50:42.508755 1232896 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:50:42.186131 1234466 main.go:141] libmachine: (bridge-734713) waiting for IP...
	I0414 13:50:42.187150 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:42.187900 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:42.188021 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:42.187885 1234612 retry.go:31] will retry after 209.280153ms: waiting for domain to come up
	I0414 13:50:42.400953 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:42.401780 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:42.401812 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:42.401758 1234612 retry.go:31] will retry after 258.587195ms: waiting for domain to come up
	I0414 13:50:42.662535 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:42.663254 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:42.663301 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:42.663223 1234612 retry.go:31] will retry after 447.059078ms: waiting for domain to come up
	I0414 13:50:43.112050 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:43.112698 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:43.112729 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:43.112651 1234612 retry.go:31] will retry after 509.754419ms: waiting for domain to come up
	I0414 13:50:43.624778 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:43.625482 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:43.625535 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:43.625448 1234612 retry.go:31] will retry after 623.011152ms: waiting for domain to come up
	I0414 13:50:44.250093 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:44.250644 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:44.250686 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:44.250534 1234612 retry.go:31] will retry after 764.557829ms: waiting for domain to come up
	I0414 13:50:45.017538 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:45.018426 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:45.018451 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:45.018313 1234612 retry.go:31] will retry after 968.96203ms: waiting for domain to come up
	I0414 13:50:45.989225 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:45.990298 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:45.990328 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:45.990229 1234612 retry.go:31] will retry after 918.990856ms: waiting for domain to come up
	I0414 13:50:48.524155 1223410 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 13:50:48.524328 1223410 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 13:50:48.525904 1223410 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:50:48.525995 1223410 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:50:48.526105 1223410 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:50:48.526269 1223410 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:50:48.526421 1223410 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:50:48.526514 1223410 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:50:48.528418 1223410 out.go:235]   - Generating certificates and keys ...
	I0414 13:50:48.528530 1223410 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:50:48.528624 1223410 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:50:48.528765 1223410 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 13:50:48.528871 1223410 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 13:50:48.528983 1223410 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 13:50:48.529064 1223410 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 13:50:48.529155 1223410 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 13:50:48.529254 1223410 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 13:50:48.529417 1223410 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 13:50:48.529560 1223410 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 13:50:48.529604 1223410 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 13:50:48.529704 1223410 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:50:48.529789 1223410 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:50:48.529839 1223410 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:50:48.529919 1223410 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:50:48.530000 1223410 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:50:48.530167 1223410 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:50:48.530286 1223410 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:50:48.530362 1223410 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:50:48.530461 1223410 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:50:45.310453 1232896 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.801647363s)
	I0414 13:50:45.310493 1232896 crio.go:469] duration metric: took 2.801895924s to extract the tarball
	I0414 13:50:45.310504 1232896 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:50:45.356652 1232896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:50:45.406626 1232896 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 13:50:45.406661 1232896 cache_images.go:84] Images are preloaded, skipping loading
	I0414 13:50:45.406670 1232896 kubeadm.go:934] updating node { 192.168.72.152 8443 v1.32.2 crio true true} ...
	I0414 13:50:45.406815 1232896 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-734713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0414 13:50:45.406909 1232896 ssh_runner.go:195] Run: crio config
	I0414 13:50:45.461461 1232896 cni.go:84] Creating CNI manager for "flannel"
	I0414 13:50:45.461488 1232896 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:50:45.461513 1232896 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.152 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-734713 NodeName:flannel-734713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 13:50:45.461635 1232896 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-734713"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.152"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.152"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 13:50:45.461707 1232896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 13:50:45.473005 1232896 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:50:45.473087 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:50:45.485045 1232896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0414 13:50:45.506983 1232896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:50:45.530421 1232896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0414 13:50:45.555515 1232896 ssh_runner.go:195] Run: grep 192.168.72.152	control-plane.minikube.internal$ /etc/hosts
	I0414 13:50:45.560505 1232896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:50:45.575551 1232896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:50:45.720221 1232896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:50:45.747368 1232896 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713 for IP: 192.168.72.152
	I0414 13:50:45.747403 1232896 certs.go:194] generating shared ca certs ...
	I0414 13:50:45.747430 1232896 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:45.747707 1232896 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:50:45.747811 1232896 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:50:45.747835 1232896 certs.go:256] generating profile certs ...
	I0414 13:50:45.747918 1232896 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.key
	I0414 13:50:45.747937 1232896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt with IP's: []
	I0414 13:50:45.922380 1232896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt ...
	I0414 13:50:45.922422 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: {Name:mk1de736571f3f8c7d352cc6b2b670d2f7a3f166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:45.922639 1232896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.key ...
	I0414 13:50:45.922655 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.key: {Name:mk2f6eadeffd7c817852e1cf122fbd49307e71e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:45.922754 1232896 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f
	I0414 13:50:45.922778 1232896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.152]
	I0414 13:50:46.110029 1232896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f ...
	I0414 13:50:46.110069 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f: {Name:mk87d3d3dae2adc62fb3b924b2cc7bd153bf0895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.110292 1232896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f ...
	I0414 13:50:46.110311 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f: {Name:mka9b50ff1a0a846f6aad9c4e3e0e6a306ad6a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.110419 1232896 certs.go:381] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt
	I0414 13:50:46.110518 1232896 certs.go:385] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key
	I0414 13:50:46.110597 1232896 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key
	I0414 13:50:46.110628 1232896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt with IP's: []
	I0414 13:50:46.456186 1232896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt ...
	I0414 13:50:46.456227 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt: {Name:mkfb4bf081c9c81523a9ab1a930bbd9a48e04eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.456431 1232896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key ...
	I0414 13:50:46.456457 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key: {Name:mk555b1a424c2e8f038bac59e6d58cf02d051438 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.456681 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:50:46.456722 1232896 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:50:46.456734 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:50:46.456765 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:50:46.456797 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:50:46.456827 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:50:46.456889 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:50:46.457527 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:50:46.487196 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:50:46.515169 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:50:46.539007 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:50:46.565526 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 13:50:46.592839 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:50:46.620546 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:50:46.650296 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 13:50:46.677899 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:50:46.706202 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:50:46.734768 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
	I0414 13:50:46.760248 1232896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:50:46.779528 1232896 ssh_runner.go:195] Run: openssl version
	I0414 13:50:46.787229 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:50:46.799578 1232896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:50:46.805310 1232896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:50:46.805402 1232896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:50:46.812411 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 13:50:46.825822 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:50:46.838323 1232896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:50:46.844286 1232896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:50:46.844403 1232896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:50:46.851198 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
	I0414 13:50:46.864017 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:50:46.875865 1232896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:50:46.881178 1232896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:50:46.881257 1232896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:50:46.890423 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
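	Note: the block above follows one pattern per CA bundle: copy the PEM to /usr/share/ca-certificates, compute its OpenSSL subject hash, then symlink it as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it. A minimal sketch of the same steps for the minikubeCA bundle (the hash b5213941 comes from the log above):
	
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"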
	I0414 13:50:46.909837 1232896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:50:46.915589 1232896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 13:50:46.915675 1232896 kubeadm.go:392] StartCluster: {Name:flannel-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:50:46.915774 1232896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:50:46.915834 1232896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:50:46.954926 1232896 cri.go:89] found id: ""
	I0414 13:50:46.955033 1232896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:50:46.966456 1232896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:50:46.977813 1232896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:50:46.989696 1232896 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:50:46.989730 1232896 kubeadm.go:157] found existing configuration files:
	
	I0414 13:50:46.989792 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:50:47.000961 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:50:47.001039 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:50:47.012218 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:50:47.023049 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:50:47.023138 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:50:47.033513 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:50:47.045033 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:50:47.045111 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:50:47.055536 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:50:47.065805 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:50:47.065884 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:50:47.078066 1232896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:50:47.261431 1232896 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
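	Note: the preflight warning above is harmless in this run because minikube starts the kubelet itself (see the "sudo systemctl start kubelet" step earlier in this log). On a hand-managed node, the fix kubeadm suggests would be, as a sketch:
	
	  sudo systemctl enable --now kubelet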
	I0414 13:50:48.532385 1223410 out.go:235]   - Booting up control plane ...
	I0414 13:50:48.532556 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:50:48.532689 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:50:48.532768 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:50:48.532843 1223410 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:50:48.533084 1223410 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:50:48.533159 1223410 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 13:50:48.533265 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.533525 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.533594 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.533814 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.533912 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534108 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534172 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534394 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534516 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534801 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534821 1223410 kubeadm.go:310] 
	I0414 13:50:48.534885 1223410 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 13:50:48.534955 1223410 kubeadm.go:310] 		timed out waiting for the condition
	I0414 13:50:48.534967 1223410 kubeadm.go:310] 
	I0414 13:50:48.535000 1223410 kubeadm.go:310] 	This error is likely caused by:
	I0414 13:50:48.535047 1223410 kubeadm.go:310] 		- The kubelet is not running
	I0414 13:50:48.535180 1223410 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 13:50:48.535194 1223410 kubeadm.go:310] 
	I0414 13:50:48.535371 1223410 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 13:50:48.535439 1223410 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 13:50:48.535500 1223410 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 13:50:48.535585 1223410 kubeadm.go:310] 
	I0414 13:50:48.535769 1223410 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 13:50:48.535905 1223410 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 13:50:48.535922 1223410 kubeadm.go:310] 
	I0414 13:50:48.536089 1223410 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 13:50:48.536225 1223410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 13:50:48.536329 1223410 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 13:50:48.536413 1223410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 13:50:48.536508 1223410 kubeadm.go:310] 
	I0414 13:50:48.536517 1223410 kubeadm.go:394] duration metric: took 8m1.284425887s to StartCluster
	I0414 13:50:48.536575 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:50:48.536648 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:50:48.585550 1223410 cri.go:89] found id: ""
	I0414 13:50:48.585590 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.585601 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:50:48.585609 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:50:48.585672 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:50:48.626898 1223410 cri.go:89] found id: ""
	I0414 13:50:48.626928 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.626940 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:50:48.626948 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:50:48.627009 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:50:48.670274 1223410 cri.go:89] found id: ""
	I0414 13:50:48.670317 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.670330 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:50:48.670338 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:50:48.670411 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:50:48.720563 1223410 cri.go:89] found id: ""
	I0414 13:50:48.720600 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.720611 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:50:48.720619 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:50:48.720686 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:50:48.767764 1223410 cri.go:89] found id: ""
	I0414 13:50:48.767799 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.767807 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:50:48.767814 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:50:48.767866 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:50:48.818486 1223410 cri.go:89] found id: ""
	I0414 13:50:48.818531 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.818544 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:50:48.818553 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:50:48.818619 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:50:48.867564 1223410 cri.go:89] found id: ""
	I0414 13:50:48.867644 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.867692 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:50:48.867706 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:50:48.867774 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:50:48.906916 1223410 cri.go:89] found id: ""
	I0414 13:50:48.906950 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.906958 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:50:48.906971 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:50:48.906988 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:50:48.955626 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:50:48.955683 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:50:49.022469 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:50:49.022525 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:50:49.041402 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:50:49.041449 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:50:49.131342 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:50:49.131373 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:50:49.131392 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0414 13:50:49.248634 1223410 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 13:50:49.248726 1223410 out.go:270] * 
	W0414 13:50:49.248809 1223410 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:50:49.248828 1223410 out.go:270] * 
	W0414 13:50:49.249735 1223410 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 13:50:49.253971 1223410 out.go:201] 
	W0414 13:50:49.255696 1223410 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:50:49.255776 1223410 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 13:50:49.255807 1223410 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 13:50:49.257975 1223410 out.go:201] 
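	Note: the run above (pid 1223410, Kubernetes v1.20.0) fails with K8S_KUBELET_NOT_RUNNING, and the log's own suggestion is to retry with the kubelet cgroup driver forced to systemd. A minimal sketch of that retry; the profile name is a placeholder because it is not shown in this excerpt:
	
	  # <profile> = the failing v1.20.0 profile (placeholder)
	  minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0 \
	    --extra-config=kubelet.cgroup-driver=systemd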
	I0414 13:50:46.910593 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:46.911338 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:46.911363 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:46.911230 1234612 retry.go:31] will retry after 1.155366589s: waiting for domain to come up
	I0414 13:50:48.068077 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:48.068834 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:48.068863 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:48.068780 1234612 retry.go:31] will retry after 1.700089826s: waiting for domain to come up
	I0414 13:50:49.770330 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:49.771048 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:49.771117 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:49.771036 1234612 retry.go:31] will retry after 2.036657651s: waiting for domain to come up
	I0414 13:50:51.808884 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:51.809332 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:51.809415 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:51.809319 1234612 retry.go:31] will retry after 3.172888858s: waiting for domain to come up
	I0414 13:50:54.984140 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:54.984848 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:54.984880 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:54.984820 1234612 retry.go:31] will retry after 4.057631495s: waiting for domain to come up
	I0414 13:50:58.428436 1232896 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 13:50:58.428548 1232896 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:50:58.428668 1232896 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:50:58.428807 1232896 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:50:58.428971 1232896 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 13:50:58.429065 1232896 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:50:58.431307 1232896 out.go:235]   - Generating certificates and keys ...
	I0414 13:50:58.431422 1232896 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:50:58.431510 1232896 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:50:58.431611 1232896 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 13:50:58.431695 1232896 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 13:50:58.431780 1232896 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 13:50:58.431855 1232896 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 13:50:58.431934 1232896 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 13:50:58.432114 1232896 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-734713 localhost] and IPs [192.168.72.152 127.0.0.1 ::1]
	I0414 13:50:58.432187 1232896 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 13:50:58.432364 1232896 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-734713 localhost] and IPs [192.168.72.152 127.0.0.1 ::1]
	I0414 13:50:58.432465 1232896 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 13:50:58.432542 1232896 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 13:50:58.432605 1232896 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 13:50:58.432690 1232896 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:50:58.432825 1232896 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:50:58.432927 1232896 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 13:50:58.433007 1232896 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:50:58.433096 1232896 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:50:58.433167 1232896 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:50:58.433305 1232896 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:50:58.433402 1232896 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:50:58.435843 1232896 out.go:235]   - Booting up control plane ...
	I0414 13:50:58.435973 1232896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:50:58.436178 1232896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:50:58.436292 1232896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:50:58.436424 1232896 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:50:58.436547 1232896 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:50:58.436611 1232896 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:50:58.436782 1232896 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 13:50:58.436910 1232896 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 13:50:58.437037 1232896 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.700673ms
	I0414 13:50:58.437164 1232896 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 13:50:58.437273 1232896 kubeadm.go:310] [api-check] The API server is healthy after 5.503941955s
	I0414 13:50:58.437430 1232896 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 13:50:58.437619 1232896 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 13:50:58.437710 1232896 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 13:50:58.437991 1232896 kubeadm.go:310] [mark-control-plane] Marking the node flannel-734713 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 13:50:58.438083 1232896 kubeadm.go:310] [bootstrap-token] Using token: 6zgao3.i3fwzwxvba12lxq2
	I0414 13:50:58.440456 1232896 out.go:235]   - Configuring RBAC rules ...
	I0414 13:50:58.440634 1232896 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 13:50:58.440821 1232896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 13:50:58.440956 1232896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 13:50:58.441103 1232896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 13:50:58.441240 1232896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 13:50:58.441334 1232896 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 13:50:58.441474 1232896 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 13:50:58.441518 1232896 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 13:50:58.441604 1232896 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 13:50:58.441692 1232896 kubeadm.go:310] 
	I0414 13:50:58.441832 1232896 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 13:50:58.441843 1232896 kubeadm.go:310] 
	I0414 13:50:58.441954 1232896 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 13:50:58.441965 1232896 kubeadm.go:310] 
	I0414 13:50:58.442011 1232896 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 13:50:58.442121 1232896 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 13:50:58.442202 1232896 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 13:50:58.442214 1232896 kubeadm.go:310] 
	I0414 13:50:58.442300 1232896 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 13:50:58.442311 1232896 kubeadm.go:310] 
	I0414 13:50:58.442366 1232896 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 13:50:58.442378 1232896 kubeadm.go:310] 
	I0414 13:50:58.442448 1232896 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 13:50:58.442549 1232896 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 13:50:58.442644 1232896 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 13:50:58.442652 1232896 kubeadm.go:310] 
	I0414 13:50:58.442759 1232896 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 13:50:58.442858 1232896 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 13:50:58.442866 1232896 kubeadm.go:310] 
	I0414 13:50:58.442978 1232896 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6zgao3.i3fwzwxvba12lxq2 \
	I0414 13:50:58.443136 1232896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5f2612fbe124e36b509339bfdb73251340c2c4faa099e85bd5300a27bddf90e9 \
	I0414 13:50:58.443184 1232896 kubeadm.go:310] 	--control-plane 
	I0414 13:50:58.443200 1232896 kubeadm.go:310] 
	I0414 13:50:58.443310 1232896 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 13:50:58.443319 1232896 kubeadm.go:310] 
	I0414 13:50:58.443447 1232896 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6zgao3.i3fwzwxvba12lxq2 \
	I0414 13:50:58.443624 1232896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5f2612fbe124e36b509339bfdb73251340c2c4faa099e85bd5300a27bddf90e9 
	I0414 13:50:58.443646 1232896 cni.go:84] Creating CNI manager for "flannel"
	I0414 13:50:58.446085 1232896 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0414 13:50:58.447927 1232896 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 13:50:58.453424 1232896 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 13:50:58.453454 1232896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0414 13:50:58.481635 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 13:50:59.015577 1232896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 13:50:59.015694 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:50:59.015723 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-734713 minikube.k8s.io/updated_at=2025_04_14T13_50_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=9d10b41d083ee2064b1b8c7e16503e13b1847696 minikube.k8s.io/name=flannel-734713 minikube.k8s.io/primary=true
	I0414 13:50:59.057626 1232896 ops.go:34] apiserver oom_adj: -16
	I0414 13:50:59.045217 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:59.045923 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:59.045959 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:59.045874 1234612 retry.go:31] will retry after 4.020907731s: waiting for domain to come up
	I0414 13:50:59.230240 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:50:59.731059 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:00.231185 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:00.730623 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:01.230917 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:01.730415 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:02.230902 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:02.731378 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:02.821756 1232896 kubeadm.go:1113] duration metric: took 3.806160411s to wait for elevateKubeSystemPrivileges
	I0414 13:51:02.821794 1232896 kubeadm.go:394] duration metric: took 15.906125614s to StartCluster
	I0414 13:51:02.821815 1232896 settings.go:142] acquiring lock: {Name:mkc68e13b098b3e7461fc88804a0aed191118bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:02.821906 1232896 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:51:02.822887 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/kubeconfig: {Name:mk5eb6c4765d4c70f1db00acbce88c0952cb579b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:02.823206 1232896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 13:51:02.823219 1232896 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 13:51:02.823292 1232896 addons.go:69] Setting storage-provisioner=true in profile "flannel-734713"
	I0414 13:51:02.823196 1232896 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:51:02.823331 1232896 addons.go:238] Setting addon storage-provisioner=true in "flannel-734713"
	I0414 13:51:02.823338 1232896 addons.go:69] Setting default-storageclass=true in profile "flannel-734713"
	I0414 13:51:02.823360 1232896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-734713"
	I0414 13:51:02.823365 1232896 host.go:66] Checking if "flannel-734713" exists ...
	I0414 13:51:02.823434 1232896 config.go:182] Loaded profile config "flannel-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:51:02.823847 1232896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:02.823889 1232896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:02.823921 1232896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:02.823893 1232896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:02.826819 1232896 out.go:177] * Verifying Kubernetes components...
	I0414 13:51:02.828426 1232896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:51:02.843993 1232896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0414 13:51:02.844013 1232896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0414 13:51:02.844630 1232896 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:02.844692 1232896 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:02.845173 1232896 main.go:141] libmachine: Using API Version  1
	I0414 13:51:02.845195 1232896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:02.845358 1232896 main.go:141] libmachine: Using API Version  1
	I0414 13:51:02.845384 1232896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:02.845619 1232896 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:02.845825 1232896 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:02.846069 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetState
	I0414 13:51:02.846195 1232896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:02.846234 1232896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:02.850656 1232896 addons.go:238] Setting addon default-storageclass=true in "flannel-734713"
	I0414 13:51:02.850733 1232896 host.go:66] Checking if "flannel-734713" exists ...
	I0414 13:51:02.851157 1232896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:02.851218 1232896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:02.865404 1232896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0414 13:51:02.865992 1232896 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:02.866625 1232896 main.go:141] libmachine: Using API Version  1
	I0414 13:51:02.866660 1232896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:02.867208 1232896 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:02.867449 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetState
	I0414 13:51:02.870195 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:51:02.870821 1232896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0414 13:51:02.871400 1232896 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:02.871938 1232896 main.go:141] libmachine: Using API Version  1
	I0414 13:51:02.871970 1232896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:02.872415 1232896 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:02.872930 1232896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:02.872994 1232896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:02.873287 1232896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:51:02.875817 1232896 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:51:02.875844 1232896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 13:51:02.875872 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:51:02.880791 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:51:02.881517 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:51:02.881554 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:51:02.881753 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:51:02.881986 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:51:02.882194 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:51:02.882378 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:51:02.890324 1232896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
	I0414 13:51:02.890846 1232896 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:02.891374 1232896 main.go:141] libmachine: Using API Version  1
	I0414 13:51:02.891395 1232896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:02.891861 1232896 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:02.892046 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetState
	I0414 13:51:02.894013 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:51:02.894281 1232896 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 13:51:02.894305 1232896 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 13:51:02.894327 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:51:02.897914 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:51:02.898527 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:51:02.898572 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:51:02.898778 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:51:02.899040 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:51:02.899308 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:51:02.899484 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:51:03.017123 1232896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 13:51:03.075403 1232896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:51:03.249352 1232896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:51:03.256733 1232896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 13:51:03.844367 1232896 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0414 13:51:03.845459 1232896 node_ready.go:35] waiting up to 15m0s for node "flannel-734713" to be "Ready" ...
	I0414 13:51:04.249421 1232896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00000851s)
	I0414 13:51:04.249487 1232896 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:04.249494 1232896 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:04.249521 1232896 main.go:141] libmachine: (flannel-734713) Calling .Close
	I0414 13:51:04.249504 1232896 main.go:141] libmachine: (flannel-734713) Calling .Close
	I0414 13:51:04.249875 1232896 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:04.249892 1232896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:04.249901 1232896 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:04.249908 1232896 main.go:141] libmachine: (flannel-734713) Calling .Close
	I0414 13:51:04.249920 1232896 main.go:141] libmachine: (flannel-734713) DBG | Closing plugin on server side
	I0414 13:51:04.249960 1232896 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:04.249967 1232896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:04.249974 1232896 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:04.249980 1232896 main.go:141] libmachine: (flannel-734713) Calling .Close
	I0414 13:51:04.250362 1232896 main.go:141] libmachine: (flannel-734713) DBG | Closing plugin on server side
	I0414 13:51:04.250403 1232896 main.go:141] libmachine: (flannel-734713) DBG | Closing plugin on server side
	I0414 13:51:04.250443 1232896 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:04.250451 1232896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:04.250528 1232896 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:04.250544 1232896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:04.283799 1232896 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:04.283841 1232896 main.go:141] libmachine: (flannel-734713) Calling .Close
	I0414 13:51:04.284210 1232896 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:04.284235 1232896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:04.286451 1232896 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 13:51:03.069151 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.069848 1234466 main.go:141] libmachine: (bridge-734713) found domain IP: 192.168.50.72
	I0414 13:51:03.069949 1234466 main.go:141] libmachine: (bridge-734713) reserving static IP address...
	I0414 13:51:03.069973 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has current primary IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.070380 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find host DHCP lease matching {name: "bridge-734713", mac: "52:54:00:35:90:d7", ip: "192.168.50.72"} in network mk-bridge-734713
	I0414 13:51:03.192263 1234466 main.go:141] libmachine: (bridge-734713) DBG | Getting to WaitForSSH function...
	I0414 13:51:03.192303 1234466 main.go:141] libmachine: (bridge-734713) reserved static IP address 192.168.50.72 for domain bridge-734713
	I0414 13:51:03.192318 1234466 main.go:141] libmachine: (bridge-734713) waiting for SSH...
	I0414 13:51:03.196253 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.197270 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.197346 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.197591 1234466 main.go:141] libmachine: (bridge-734713) DBG | Using SSH client type: external
	I0414 13:51:03.197621 1234466 main.go:141] libmachine: (bridge-734713) DBG | Using SSH private key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa (-rw-------)
	I0414 13:51:03.197741 1234466 main.go:141] libmachine: (bridge-734713) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:51:03.197769 1234466 main.go:141] libmachine: (bridge-734713) DBG | About to run SSH command:
	I0414 13:51:03.197783 1234466 main.go:141] libmachine: (bridge-734713) DBG | exit 0
	I0414 13:51:03.329416 1234466 main.go:141] libmachine: (bridge-734713) DBG | SSH cmd err, output: <nil>: 
	I0414 13:51:03.329855 1234466 main.go:141] libmachine: (bridge-734713) KVM machine creation complete
	I0414 13:51:03.330318 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetConfigRaw
	I0414 13:51:03.331187 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:03.331540 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:03.332028 1234466 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 13:51:03.332056 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetState
	I0414 13:51:03.334536 1234466 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 13:51:03.334563 1234466 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 13:51:03.334570 1234466 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 13:51:03.334579 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:03.339440 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.339955 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.340015 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.340230 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:03.340549 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.340838 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.341016 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:03.341345 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:03.341659 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:03.341675 1234466 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 13:51:03.459720 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:51:03.459749 1234466 main.go:141] libmachine: Detecting the provisioner...
	I0414 13:51:03.459757 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:03.463722 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.466185 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:03.464540 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.466262 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.467868 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.468269 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.468605 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:03.468920 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:03.469331 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:03.469364 1234466 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 13:51:03.588935 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 13:51:03.589031 1234466 main.go:141] libmachine: found compatible host: buildroot
	I0414 13:51:03.589038 1234466 main.go:141] libmachine: Provisioning with buildroot...
	I0414 13:51:03.589047 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetMachineName
	I0414 13:51:03.589391 1234466 buildroot.go:166] provisioning hostname "bridge-734713"
	I0414 13:51:03.589424 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetMachineName
	I0414 13:51:03.589671 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:03.592944 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.593484 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.593514 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.593785 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:03.594064 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.594233 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.594405 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:03.594611 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:03.594897 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:03.594923 1234466 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-734713 && echo "bridge-734713" | sudo tee /etc/hostname
	I0414 13:51:03.732323 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-734713
	
	I0414 13:51:03.732364 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:03.735571 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.736034 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.736078 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.736344 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:03.736594 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.736852 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.737040 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:03.737286 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:03.737645 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:03.737670 1234466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-734713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-734713/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-734713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:51:03.861255 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:51:03.861290 1234466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 13:51:03.861322 1234466 buildroot.go:174] setting up certificates
	I0414 13:51:03.861338 1234466 provision.go:84] configureAuth start
	I0414 13:51:03.861355 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetMachineName
	I0414 13:51:03.861685 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetIP
	I0414 13:51:03.865590 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.866025 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.866062 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.866348 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:03.869442 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.869898 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.869926 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.870121 1234466 provision.go:143] copyHostCerts
	I0414 13:51:03.870185 1234466 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem, removing ...
	I0414 13:51:03.870212 1234466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem
	I0414 13:51:03.870278 1234466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 13:51:03.870420 1234466 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem, removing ...
	I0414 13:51:03.870433 1234466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem
	I0414 13:51:03.870464 1234466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 13:51:03.870536 1234466 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem, removing ...
	I0414 13:51:03.870547 1234466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem
	I0414 13:51:03.870573 1234466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 13:51:03.870642 1234466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.bridge-734713 san=[127.0.0.1 192.168.50.72 bridge-734713 localhost minikube]
	I0414 13:51:04.001290 1234466 provision.go:177] copyRemoteCerts
	I0414 13:51:04.001357 1234466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:51:04.001402 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.006612 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.007201 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.007264 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.007527 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.007948 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.008399 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.008658 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:04.110676 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 13:51:04.146328 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 13:51:04.183404 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:51:04.218879 1234466 provision.go:87] duration metric: took 357.5208ms to configureAuth
	I0414 13:51:04.218928 1234466 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:51:04.219185 1234466 config.go:182] Loaded profile config "bridge-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:51:04.219366 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.222985 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.223614 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.223759 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.223897 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.224211 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.224507 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.224716 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.224950 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:04.225304 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:04.225326 1234466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:51:04.471711 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:51:04.471745 1234466 main.go:141] libmachine: Checking connection to Docker...
	I0414 13:51:04.471752 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetURL
	I0414 13:51:04.473082 1234466 main.go:141] libmachine: (bridge-734713) DBG | using libvirt version 6000000
	I0414 13:51:04.475648 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.476214 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.476311 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.476447 1234466 main.go:141] libmachine: Docker is up and running!
	I0414 13:51:04.476469 1234466 main.go:141] libmachine: Reticulating splines...
	I0414 13:51:04.476479 1234466 client.go:171] duration metric: took 25.134294563s to LocalClient.Create
	I0414 13:51:04.476516 1234466 start.go:167] duration metric: took 25.134391195s to libmachine.API.Create "bridge-734713"
	I0414 13:51:04.476531 1234466 start.go:293] postStartSetup for "bridge-734713" (driver="kvm2")
	I0414 13:51:04.476547 1234466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:51:04.476577 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:04.476946 1234466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:51:04.477019 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.481423 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.481812 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.481848 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.482092 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.482350 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.482602 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.482761 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:04.575131 1234466 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:51:04.581996 1234466 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:51:04.582039 1234466 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 13:51:04.582111 1234466 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 13:51:04.582198 1234466 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem -> 11757462.pem in /etc/ssl/certs
	I0414 13:51:04.582349 1234466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:51:04.597737 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:51:04.630897 1234466 start.go:296] duration metric: took 154.348306ms for postStartSetup
	I0414 13:51:04.630977 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetConfigRaw
	I0414 13:51:04.631758 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetIP
	I0414 13:51:04.635561 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.636203 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.636239 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.636657 1234466 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/config.json ...
	I0414 13:51:04.636995 1234466 start.go:128] duration metric: took 25.323511417s to createHost
	I0414 13:51:04.637034 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.641757 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.642490 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.642541 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.643058 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.643476 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.643753 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.643948 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.644212 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:04.644436 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:04.644450 1234466 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:51:04.756989 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744638664.694741335
	
	I0414 13:51:04.757017 1234466 fix.go:216] guest clock: 1744638664.694741335
	I0414 13:51:04.757025 1234466 fix.go:229] Guest: 2025-04-14 13:51:04.694741335 +0000 UTC Remote: 2025-04-14 13:51:04.637015139 +0000 UTC m=+33.216809105 (delta=57.726196ms)
	I0414 13:51:04.757056 1234466 fix.go:200] guest clock delta is within tolerance: 57.726196ms
	I0414 13:51:04.757063 1234466 start.go:83] releasing machines lock for "bridge-734713", held for 25.443823589s
	I0414 13:51:04.757088 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:04.757537 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetIP
	I0414 13:51:04.760882 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.761347 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.761397 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.761589 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:04.762531 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:04.762812 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:04.762952 1234466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:51:04.763022 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.763209 1234466 ssh_runner.go:195] Run: cat /version.json
	I0414 13:51:04.763244 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.767014 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.767444 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.767702 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.767746 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.767912 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.768091 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.768141 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.768206 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.768336 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.768453 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.768561 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.768612 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:04.768741 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.768921 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:04.848806 1234466 ssh_runner.go:195] Run: systemctl --version
	I0414 13:51:04.875088 1234466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:51:05.047562 1234466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:51:05.054732 1234466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:51:05.054818 1234466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:51:05.074078 1234466 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:51:05.074135 1234466 start.go:495] detecting cgroup driver to use...
	I0414 13:51:05.074213 1234466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:51:05.094791 1234466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:51:05.111331 1234466 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:51:05.111394 1234466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:51:05.127960 1234466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:51:05.146340 1234466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:51:05.273211 1234466 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:51:05.466394 1234466 docker.go:233] disabling docker service ...
	I0414 13:51:05.466486 1234466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:51:05.484275 1234466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:51:05.501574 1234466 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:51:05.634554 1234466 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:51:05.775588 1234466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:51:05.793697 1234466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:51:05.818000 1234466 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 13:51:05.818084 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.831265 1234466 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:51:05.831405 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.843451 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.857199 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.868766 1234466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:51:05.881637 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.895782 1234466 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.919082 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.931968 1234466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:51:05.943972 1234466 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:51:05.944074 1234466 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:51:05.961516 1234466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:51:05.974657 1234466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:51:06.120359 1234466 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:51:06.236997 1234466 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:51:06.237097 1234466 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:51:06.242869 1234466 start.go:563] Will wait 60s for crictl version
	I0414 13:51:06.242997 1234466 ssh_runner.go:195] Run: which crictl
	I0414 13:51:06.248106 1234466 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:51:06.289383 1234466 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:51:06.289474 1234466 ssh_runner.go:195] Run: crio --version
	I0414 13:51:06.323282 1234466 ssh_runner.go:195] Run: crio --version
	I0414 13:51:06.361881 1234466 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 13:51:06.363811 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetIP
	I0414 13:51:06.367171 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:06.367826 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:06.367890 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:06.368205 1234466 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 13:51:06.373526 1234466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:51:06.389523 1234466 kubeadm.go:883] updating cluster {Name:bridge-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:51:06.389650 1234466 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 13:51:06.389719 1234466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:51:06.429517 1234466 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 13:51:06.429611 1234466 ssh_runner.go:195] Run: which lz4
	I0414 13:51:06.434072 1234466 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:51:06.438894 1234466 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:51:06.438941 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
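The preload is an lz4-compressed tar of pre-pulled container images and metadata for this Kubernetes/runtime combination; since it is missing on the fresh VM, the cached copy is pushed over and extracted into /var a few lines further down. If needed, the cached tarball can be inspected on the host, for example:

	# List the first entries of the cached preload tarball (path taken from the log above).
	lz4 -dc /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 | tar -t | head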
	I0414 13:51:04.288053 1232896 addons.go:514] duration metric: took 1.464821279s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 13:51:04.349878 1232896 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-734713" context rescaled to 1 replicas
	I0414 13:51:05.848973 1232896 node_ready.go:53] node "flannel-734713" has status "Ready":"False"
	I0414 13:51:07.850481 1232896 node_ready.go:53] node "flannel-734713" has status "Ready":"False"
	I0414 13:51:08.005562 1234466 crio.go:462] duration metric: took 1.571586403s to copy over tarball
	I0414 13:51:08.005652 1234466 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:51:10.824904 1234466 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.819216537s)
	I0414 13:51:10.824951 1234466 crio.go:469] duration metric: took 2.819355986s to extract the tarball
	I0414 13:51:10.824963 1234466 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:51:10.869465 1234466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:51:10.915475 1234466 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 13:51:10.915505 1234466 cache_images.go:84] Images are preloaded, skipping loading
	I0414 13:51:10.915514 1234466 kubeadm.go:934] updating node { 192.168.50.72 8443 v1.32.2 crio true true} ...
	I0414 13:51:10.915648 1234466 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-734713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0414 13:51:10.915776 1234466 ssh_runner.go:195] Run: crio config
	I0414 13:51:10.965367 1234466 cni.go:84] Creating CNI manager for "bridge"
	I0414 13:51:10.965397 1234466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:51:10.965419 1234466 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-734713 NodeName:bridge-734713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 13:51:10.965567 1234466 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-734713"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.72"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
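The block above is the full kubeadm config minikube renders for this node: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration concatenated into one file, written to /var/tmp/minikube/kubeadm.yaml.new below. A sketch of how it could be sanity-checked on the node once it is in place, using the standard kubeadm dry-run flag and the binary path shown later in the log:

	# Dry-run kubeadm against the generated config; no changes are applied.
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run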
	
	I0414 13:51:10.965653 1234466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 13:51:10.976611 1234466 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:51:10.976704 1234466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:51:10.988488 1234466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0414 13:51:11.008049 1234466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:51:11.026412 1234466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0414 13:51:11.046765 1234466 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0414 13:51:11.052685 1234466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:51:11.068821 1234466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:51:11.199728 1234466 ssh_runner.go:195] Run: sudo systemctl start kubelet
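The kubelet unit fragment shown at 13:51:10.915648 is installed here as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the empty ExecStart= line is the usual systemd idiom for clearing the base unit's command before redefining it), then systemd is reloaded and kubelet is started. A quick way to confirm what kubelet actually runs with:

	# Show the effective unit including drop-ins, then the running state.
	systemctl cat kubelet
	sudo systemctl status kubelet --no-pager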
	I0414 13:51:11.222437 1234466 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713 for IP: 192.168.50.72
	I0414 13:51:11.222461 1234466 certs.go:194] generating shared ca certs ...
	I0414 13:51:11.222480 1234466 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:11.222636 1234466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:51:11.222673 1234466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:51:11.222680 1234466 certs.go:256] generating profile certs ...
	I0414 13:51:11.222732 1234466 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.key
	I0414 13:51:11.222747 1234466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt with IP's: []
	I0414 13:51:11.365988 1234466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt ...
	I0414 13:51:11.366026 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: {Name:mk1c3cc5be5c7be288ffe1c32f0a1821e7236131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:11.450947 1234466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.key ...
	I0414 13:51:11.451016 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.key: {Name:mk689ac422a768ba2f3657cd71c037393bb8d2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:11.451202 1234466 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key.61a2bbaa
	I0414 13:51:11.451242 1234466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt.61a2bbaa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.72]
	I0414 13:51:11.497666 1234466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt.61a2bbaa ...
	I0414 13:51:11.497708 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt.61a2bbaa: {Name:mk4bdd975f2523e3521ea8be6415827ba4579231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:11.612862 1234466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key.61a2bbaa ...
	I0414 13:51:11.612902 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key.61a2bbaa: {Name:mkef710d1e4f54f4806f158371103d3edd21f34c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:11.613047 1234466 certs.go:381] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt.61a2bbaa -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt
	I0414 13:51:11.613163 1234466 certs.go:385] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key.61a2bbaa -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key
	I0414 13:51:11.613305 1234466 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.key
	I0414 13:51:11.613336 1234466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.crt with IP's: []
	I0414 13:51:12.411383 1234466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.crt ...
	I0414 13:51:12.411424 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.crt: {Name:mk79b09a5bdb6725dd71016772cd31da197161cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:12.411622 1234466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.key ...
	I0414 13:51:12.411637 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.key: {Name:mk560c1ea1a143fd965d6012faf57eea3e2d6f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:12.411846 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:51:12.411886 1234466 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:51:12.411893 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:51:12.411915 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:51:12.411935 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:51:12.411959 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:51:12.411997 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:51:12.412576 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:51:12.444925 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:51:12.474743 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:51:12.504980 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:51:12.536499 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 13:51:12.563689 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:51:12.590634 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:51:12.635261 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 13:51:12.666123 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:51:12.694495 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:51:12.728746 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
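The profile certificates generated above are copied into /var/lib/minikube/certs (the certificatesDir from the kubeadm config), and the CA PEMs are staged under /usr/share/ca-certificates for the trust-store linking that follows. The SANs baked into the apiserver certificate can be checked directly on the node, for example:

	# Print the Subject Alternative Names of the apiserver certificate copied above.
	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'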
	I0414 13:51:12.758377 1234466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:51:12.779305 1234466 ssh_runner.go:195] Run: openssl version
	I0414 13:51:12.786047 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:51:12.798761 1234466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:51:12.804619 1234466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:51:12.804735 1234466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:51:12.812071 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 13:51:12.827337 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:51:12.841867 1234466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:51:12.847184 1234466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:51:12.847274 1234466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:51:12.856080 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 13:51:12.876470 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:51:12.892156 1234466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:51:12.897657 1234466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:51:12.897744 1234466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:51:12.904810 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
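The three test/ls/hash/link sequences above all follow the same pattern: each CA PEM placed under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem) so TLS clients on the node trust it. The pattern in isolation:

	# Link a CA cert into the system trust store under its subject-hash name.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in the log above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"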
	I0414 13:51:12.917789 1234466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:51:12.923108 1234466 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 13:51:12.923173 1234466 kubeadm.go:392] StartCluster: {Name:bridge-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:51:12.923248 1234466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:51:12.923303 1234466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:51:12.962335 1234466 cri.go:89] found id: ""
	I0414 13:51:12.962418 1234466 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:51:12.973991 1234466 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:51:12.985345 1234466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:51:12.995695 1234466 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:51:12.995720 1234466 kubeadm.go:157] found existing configuration files:
	
	I0414 13:51:12.995783 1234466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:51:13.005622 1234466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:51:13.005697 1234466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:51:13.016784 1234466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:51:13.027489 1234466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:51:13.027586 1234466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:51:13.039289 1234466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:51:13.051866 1234466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:51:13.051980 1234466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:51:13.063819 1234466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:51:13.075853 1234466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:51:13.075919 1234466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
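The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if it does not contain it. On this first start the files simply do not exist yet, so every grep exits with status 2 and the rm calls are no-ops. The same check, written as a loop:

	# Remove any kubeconfig that does not point at the expected endpoint.
	EP=https://control-plane.minikube.internal:8443
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done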
	I0414 13:51:13.087095 1234466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:51:13.154406 1234466 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 13:51:13.154587 1234466 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:51:13.278999 1234466 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:51:13.279188 1234466 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:51:13.279334 1234466 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 13:51:13.289166 1234466 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:51:10.349310 1232896 node_ready.go:53] node "flannel-734713" has status "Ready":"False"
	I0414 13:51:11.892213 1232896 node_ready.go:49] node "flannel-734713" has status "Ready":"True"
	I0414 13:51:11.892247 1232896 node_ready.go:38] duration metric: took 8.046750945s for node "flannel-734713" to be "Ready" ...
	I0414 13:51:11.892257 1232896 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:51:13.051069 1232896 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-8492w" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:13.335073 1234466 out.go:235]   - Generating certificates and keys ...
	I0414 13:51:13.335217 1234466 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:51:13.335366 1234466 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:51:13.992852 1234466 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 13:51:14.120934 1234466 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 13:51:14.598211 1234466 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 13:51:15.150015 1234466 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 13:51:15.222014 1234466 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 13:51:15.222395 1234466 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-734713 localhost] and IPs [192.168.50.72 127.0.0.1 ::1]
	I0414 13:51:15.329052 1234466 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 13:51:15.329421 1234466 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-734713 localhost] and IPs [192.168.50.72 127.0.0.1 ::1]
	I0414 13:51:15.545238 1234466 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 13:51:15.704672 1234466 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 13:51:15.786513 1234466 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 13:51:15.786626 1234466 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:51:15.897209 1234466 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:51:16.070403 1234466 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 13:51:16.151058 1234466 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:51:16.220858 1234466 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:51:16.431256 1234466 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:51:16.431884 1234466 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:51:16.434284 1234466 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:51:16.437374 1234466 out.go:235]   - Booting up control plane ...
	I0414 13:51:16.437488 1234466 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:51:16.437576 1234466 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:51:16.437664 1234466 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:51:16.456316 1234466 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:51:16.463413 1234466 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:51:16.463494 1234466 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:51:15.064385 1232896 pod_ready.go:103] pod "coredns-668d6bf9bc-8492w" in "kube-system" namespace has status "Ready":"False"
	I0414 13:51:16.558699 1232896 pod_ready.go:93] pod "coredns-668d6bf9bc-8492w" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:16.558738 1232896 pod_ready.go:82] duration metric: took 3.50762195s for pod "coredns-668d6bf9bc-8492w" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.558755 1232896 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.564828 1232896 pod_ready.go:93] pod "etcd-flannel-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:16.564874 1232896 pod_ready.go:82] duration metric: took 6.109539ms for pod "etcd-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.564894 1232896 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.570966 1232896 pod_ready.go:93] pod "kube-apiserver-flannel-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:16.571007 1232896 pod_ready.go:82] duration metric: took 6.102875ms for pod "kube-apiserver-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.571025 1232896 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.576522 1232896 pod_ready.go:93] pod "kube-controller-manager-flannel-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:16.576553 1232896 pod_ready.go:82] duration metric: took 5.519953ms for pod "kube-controller-manager-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.576565 1232896 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5s8qf" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.633709 1232896 pod_ready.go:93] pod "kube-proxy-5s8qf" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:16.633746 1232896 pod_ready.go:82] duration metric: took 57.173138ms for pod "kube-proxy-5s8qf" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.633760 1232896 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:17.032077 1232896 pod_ready.go:93] pod "kube-scheduler-flannel-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:17.032107 1232896 pod_ready.go:82] duration metric: took 398.339484ms for pod "kube-scheduler-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:17.032119 1232896 pod_ready.go:39] duration metric: took 5.139837902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:51:17.032142 1232896 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:51:17.032202 1232896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:51:17.051024 1232896 api_server.go:72] duration metric: took 14.227638835s to wait for apiserver process to appear ...
	I0414 13:51:17.051068 1232896 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:51:17.051101 1232896 api_server.go:253] Checking apiserver healthz at https://192.168.72.152:8443/healthz ...
	I0414 13:51:17.056849 1232896 api_server.go:279] https://192.168.72.152:8443/healthz returned 200:
	ok
	I0414 13:51:17.058306 1232896 api_server.go:141] control plane version: v1.32.2
	I0414 13:51:17.058339 1232896 api_server.go:131] duration metric: took 7.261984ms to wait for apiserver health ...
	I0414 13:51:17.058352 1232896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:51:17.233626 1232896 system_pods.go:59] 7 kube-system pods found
	I0414 13:51:17.233668 1232896 system_pods.go:61] "coredns-668d6bf9bc-8492w" [2b8a316e-51f4-421a-92a6-3f073f3aa973] Running
	I0414 13:51:17.233675 1232896 system_pods.go:61] "etcd-flannel-734713" [4ac0fa30-64f5-4a90-bf1e-ddff2f35b0b0] Running
	I0414 13:51:17.233681 1232896 system_pods.go:61] "kube-apiserver-flannel-734713" [d5144931-50e7-4e41-83e4-1e1fbe41f37a] Running
	I0414 13:51:17.233689 1232896 system_pods.go:61] "kube-controller-manager-flannel-734713" [905b4b89-7621-4191-bf5c-96cc140ca066] Running
	I0414 13:51:17.233694 1232896 system_pods.go:61] "kube-proxy-5s8qf" [a9d515b7-10fe-4b11-81a5-194f1db2490a] Running
	I0414 13:51:17.233700 1232896 system_pods.go:61] "kube-scheduler-flannel-734713" [8a6a7583-a85b-4bf2-a0c1-440b86a31937] Running
	I0414 13:51:17.233704 1232896 system_pods.go:61] "storage-provisioner" [dd623537-993a-42ab-a151-492270dce1e4] Running
	I0414 13:51:17.233712 1232896 system_pods.go:74] duration metric: took 175.353961ms to wait for pod list to return data ...
	I0414 13:51:17.233725 1232896 default_sa.go:34] waiting for default service account to be created ...
	I0414 13:51:17.433042 1232896 default_sa.go:45] found service account: "default"
	I0414 13:51:17.433073 1232896 default_sa.go:55] duration metric: took 199.341251ms for default service account to be created ...
	I0414 13:51:17.433084 1232896 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 13:51:17.632131 1232896 system_pods.go:86] 7 kube-system pods found
	I0414 13:51:17.632172 1232896 system_pods.go:89] "coredns-668d6bf9bc-8492w" [2b8a316e-51f4-421a-92a6-3f073f3aa973] Running
	I0414 13:51:17.632178 1232896 system_pods.go:89] "etcd-flannel-734713" [4ac0fa30-64f5-4a90-bf1e-ddff2f35b0b0] Running
	I0414 13:51:17.632182 1232896 system_pods.go:89] "kube-apiserver-flannel-734713" [d5144931-50e7-4e41-83e4-1e1fbe41f37a] Running
	I0414 13:51:17.632186 1232896 system_pods.go:89] "kube-controller-manager-flannel-734713" [905b4b89-7621-4191-bf5c-96cc140ca066] Running
	I0414 13:51:17.632189 1232896 system_pods.go:89] "kube-proxy-5s8qf" [a9d515b7-10fe-4b11-81a5-194f1db2490a] Running
	I0414 13:51:17.632193 1232896 system_pods.go:89] "kube-scheduler-flannel-734713" [8a6a7583-a85b-4bf2-a0c1-440b86a31937] Running
	I0414 13:51:17.632196 1232896 system_pods.go:89] "storage-provisioner" [dd623537-993a-42ab-a151-492270dce1e4] Running
	I0414 13:51:17.632202 1232896 system_pods.go:126] duration metric: took 199.11238ms to wait for k8s-apps to be running ...
	I0414 13:51:17.632211 1232896 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 13:51:17.632261 1232896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:51:17.650071 1232896 system_svc.go:56] duration metric: took 17.845536ms WaitForService to wait for kubelet
	I0414 13:51:17.650111 1232896 kubeadm.go:582] duration metric: took 14.826762399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:51:17.650133 1232896 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:51:17.831879 1232896 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:51:17.831924 1232896 node_conditions.go:123] node cpu capacity is 2
	I0414 13:51:17.831957 1232896 node_conditions.go:105] duration metric: took 181.817034ms to run NodePressure ...
	I0414 13:51:17.831976 1232896 start.go:241] waiting for startup goroutines ...
	I0414 13:51:17.831987 1232896 start.go:246] waiting for cluster config update ...
	I0414 13:51:17.832003 1232896 start.go:255] writing updated cluster config ...
	I0414 13:51:17.832321 1232896 ssh_runner.go:195] Run: rm -f paused
	I0414 13:51:17.892758 1232896 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 13:51:17.895371 1232896 out.go:177] * Done! kubectl is now configured to use "flannel-734713" cluster and "default" namespace by default
	I0414 13:51:16.600377 1234466 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 13:51:16.600569 1234466 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 13:51:17.601297 1234466 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001158671s
	I0414 13:51:17.601416 1234466 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 13:51:23.104197 1234466 kubeadm.go:310] [api-check] The API server is healthy after 5.502711209s
	I0414 13:51:23.119451 1234466 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 13:51:23.135571 1234466 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 13:51:23.172527 1234466 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 13:51:23.172760 1234466 kubeadm.go:310] [mark-control-plane] Marking the node bridge-734713 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 13:51:23.186500 1234466 kubeadm.go:310] [bootstrap-token] Using token: yq0cb1.eusflw7rmhjh0znj
	I0414 13:51:23.188519 1234466 out.go:235]   - Configuring RBAC rules ...
	I0414 13:51:23.188689 1234466 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 13:51:23.203763 1234466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 13:51:23.215702 1234466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 13:51:23.222090 1234466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 13:51:23.227812 1234466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 13:51:23.233964 1234466 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 13:51:23.511520 1234466 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 13:51:23.988516 1234466 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 13:51:24.512335 1234466 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 13:51:24.513730 1234466 kubeadm.go:310] 
	I0414 13:51:24.513834 1234466 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 13:51:24.513855 1234466 kubeadm.go:310] 
	I0414 13:51:24.513951 1234466 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 13:51:24.513960 1234466 kubeadm.go:310] 
	I0414 13:51:24.513994 1234466 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 13:51:24.514078 1234466 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 13:51:24.514154 1234466 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 13:51:24.514163 1234466 kubeadm.go:310] 
	I0414 13:51:24.514232 1234466 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 13:51:24.514249 1234466 kubeadm.go:310] 
	I0414 13:51:24.514294 1234466 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 13:51:24.514300 1234466 kubeadm.go:310] 
	I0414 13:51:24.514343 1234466 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 13:51:24.514404 1234466 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 13:51:24.514461 1234466 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 13:51:24.514464 1234466 kubeadm.go:310] 
	I0414 13:51:24.514574 1234466 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 13:51:24.514681 1234466 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 13:51:24.514691 1234466 kubeadm.go:310] 
	I0414 13:51:24.514837 1234466 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yq0cb1.eusflw7rmhjh0znj \
	I0414 13:51:24.514989 1234466 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5f2612fbe124e36b509339bfdb73251340c2c4faa099e85bd5300a27bddf90e9 \
	I0414 13:51:24.515043 1234466 kubeadm.go:310] 	--control-plane 
	I0414 13:51:24.515086 1234466 kubeadm.go:310] 
	I0414 13:51:24.515242 1234466 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 13:51:24.515267 1234466 kubeadm.go:310] 
	I0414 13:51:24.515375 1234466 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yq0cb1.eusflw7rmhjh0znj \
	I0414 13:51:24.515520 1234466 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5f2612fbe124e36b509339bfdb73251340c2c4faa099e85bd5300a27bddf90e9 
	I0414 13:51:24.516591 1234466 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
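The join commands printed above carry the bootstrap token plus a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's public key. On this node the CA lives under /var/lib/minikube/certs (the certificatesDir from the kubeadm config) rather than the stock /etc/kubernetes/pki, so the hash can be recomputed with the standard recipe from the kubeadm docs; the warning about the kubelet service is addressed by enabling the unit:

	# Recompute the discovery token CA cert hash from the cluster CA public key.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# Silence the kubelet warning above.
	sudo systemctl enable kubelet.service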
	I0414 13:51:24.516632 1234466 cni.go:84] Creating CNI manager for "bridge"
	I0414 13:51:24.518631 1234466 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 13:51:24.520244 1234466 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 13:51:24.530819 1234466 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
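minikube then writes its bridge CNI config as /etc/cni/net.d/1-k8s.conflist (496 bytes; the exact content is not echoed in the log). For orientation only, a generic conflist for the CNI bridge plugin with the cluster's 10.244.0.0/16 pod CIDR looks roughly like the sketch below; the real file minikube generates may differ in names and per-node subnetting:

	# Illustrative bridge CNI conflist only -- not the literal file written above.
	cat <<-'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local",
	                "ranges": [[ { "subnet": "10.244.0.0/16" } ]],
	                "routes": [ { "dst": "0.0.0.0/0" } ] } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF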
	I0414 13:51:24.555410 1234466 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 13:51:24.555491 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:24.555547 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-734713 minikube.k8s.io/updated_at=2025_04_14T13_51_24_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=9d10b41d083ee2064b1b8c7e16503e13b1847696 minikube.k8s.io/name=bridge-734713 minikube.k8s.io/primary=true
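Alongside reading the apiserver's oom_adj, two post-init touches are issued here: a minikube-rbac ClusterRoleBinding that grants cluster-admin to the kube-system:default service account (so subsequently applied addon manifests work), and a label/annotation pass that stamps minikube metadata onto node bridge-734713. Both are easy to verify afterwards:

	# Confirm the RBAC binding and the minikube metadata labels.
	kubectl get clusterrolebinding minikube-rbac -o wide
	kubectl get node bridge-734713 --show-labels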
	I0414 13:51:24.585267 1234466 ops.go:34] apiserver oom_adj: -16
	I0414 13:51:24.714624 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:25.215362 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:25.715448 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:26.215573 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:26.715531 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:27.215364 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:27.715418 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:28.215598 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:28.714775 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:28.806470 1234466 kubeadm.go:1113] duration metric: took 4.251055299s to wait for elevateKubeSystemPrivileges
	I0414 13:51:28.806531 1234466 kubeadm.go:394] duration metric: took 15.883361s to StartCluster
	I0414 13:51:28.806560 1234466 settings.go:142] acquiring lock: {Name:mkc68e13b098b3e7461fc88804a0aed191118bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:28.806675 1234466 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:51:28.807853 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/kubeconfig: {Name:mk5eb6c4765d4c70f1db00acbce88c0952cb579b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:28.808159 1234466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 13:51:28.808158 1234466 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:51:28.808314 1234466 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 13:51:28.808443 1234466 config.go:182] Loaded profile config "bridge-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:51:28.808461 1234466 addons.go:69] Setting default-storageclass=true in profile "bridge-734713"
	I0414 13:51:28.808490 1234466 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-734713"
	I0414 13:51:28.808443 1234466 addons.go:69] Setting storage-provisioner=true in profile "bridge-734713"
	I0414 13:51:28.808529 1234466 addons.go:238] Setting addon storage-provisioner=true in "bridge-734713"
	I0414 13:51:28.808576 1234466 host.go:66] Checking if "bridge-734713" exists ...
	I0414 13:51:28.809008 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:28.809009 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:28.809061 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:28.809074 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:28.809985 1234466 out.go:177] * Verifying Kubernetes components...
	I0414 13:51:28.811969 1234466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:51:28.832482 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I0414 13:51:28.832765 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0414 13:51:28.833086 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:28.833355 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:28.833721 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:51:28.833742 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:28.833886 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:51:28.833920 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:28.834368 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:28.834501 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:28.834747 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetState
	I0414 13:51:28.835480 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:28.835539 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:28.839815 1234466 addons.go:238] Setting addon default-storageclass=true in "bridge-734713"
	I0414 13:51:28.839874 1234466 host.go:66] Checking if "bridge-734713" exists ...
	I0414 13:51:28.840302 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:28.840341 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:28.854113 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I0414 13:51:28.854669 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:28.855231 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:51:28.855258 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:28.855711 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:28.855915 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetState
	I0414 13:51:28.857140 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I0414 13:51:28.857755 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:28.858280 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:51:28.858310 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:28.858343 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:28.859784 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:28.860317 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:28.860336 1234466 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:51:28.860346 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:28.861814 1234466 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:51:28.861837 1234466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 13:51:28.861863 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:28.866517 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:28.867027 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:28.867057 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:28.867500 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:28.867763 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:28.867955 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:28.868113 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:28.881764 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42741
	I0414 13:51:28.882567 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:28.883247 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:51:28.883270 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:28.883794 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:28.884041 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetState
	I0414 13:51:28.886084 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:28.886384 1234466 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 13:51:28.886403 1234466 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 13:51:28.886428 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:28.890687 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:28.891208 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:28.891236 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:28.891481 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:28.891753 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:28.891981 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:28.892209 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:29.105890 1234466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
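The sed pipeline above edits the live CoreDNS Corefile: it inserts a hosts block mapping host.minikube.internal to 192.168.50.1 (with fallthrough) ahead of the forward directive, adds a log directive, and replaces the ConfigMap. The injected stanza can be read back directly:

	# The Corefile should now contain:
	#     hosts {
	#        192.168.50.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'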
	I0414 13:51:29.106101 1234466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:51:29.225606 1234466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 13:51:29.360706 1234466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:51:29.879719 1234466 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
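	(For reference: the sed pipeline logged just above splices two stanzas into the CoreDNS Corefile held in the "coredns" ConfigMap. Reconstructed from that command itself, not from captured output, the patched Corefile would read roughly:
	        log
	        errors
	        ...
	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	The hosts block is what makes host.minikube.internal resolve to the host gateway 192.168.50.1 from inside the cluster; "fallthrough" hands every other name on to the remaining plugins.)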
	I0414 13:51:29.879875 1234466 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:29.879899 1234466 main.go:141] libmachine: (bridge-734713) Calling .Close
	I0414 13:51:29.880410 1234466 main.go:141] libmachine: (bridge-734713) DBG | Closing plugin on server side
	I0414 13:51:29.880453 1234466 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:29.880473 1234466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:29.880491 1234466 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:29.880517 1234466 main.go:141] libmachine: (bridge-734713) Calling .Close
	I0414 13:51:29.880816 1234466 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:29.880834 1234466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:29.880820 1234466 main.go:141] libmachine: (bridge-734713) DBG | Closing plugin on server side
	I0414 13:51:29.881040 1234466 node_ready.go:35] waiting up to 15m0s for node "bridge-734713" to be "Ready" ...
	I0414 13:51:29.912475 1234466 node_ready.go:49] node "bridge-734713" has status "Ready":"True"
	I0414 13:51:29.912512 1234466 node_ready.go:38] duration metric: took 31.426837ms for node "bridge-734713" to be "Ready" ...
	I0414 13:51:29.912539 1234466 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:51:29.921277 1234466 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:29.921312 1234466 main.go:141] libmachine: (bridge-734713) Calling .Close
	I0414 13:51:29.921670 1234466 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:29.921690 1234466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:29.930866 1234466 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:30.208651 1234466 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:30.208688 1234466 main.go:141] libmachine: (bridge-734713) Calling .Close
	I0414 13:51:30.209285 1234466 main.go:141] libmachine: (bridge-734713) DBG | Closing plugin on server side
	I0414 13:51:30.209346 1234466 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:30.209375 1234466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:30.209397 1234466 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:30.209408 1234466 main.go:141] libmachine: (bridge-734713) Calling .Close
	I0414 13:51:30.209799 1234466 main.go:141] libmachine: (bridge-734713) DBG | Closing plugin on server side
	I0414 13:51:30.209869 1234466 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:30.209902 1234466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:30.212449 1234466 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 13:51:30.214539 1234466 addons.go:514] duration metric: took 1.406225375s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 13:51:30.386708 1234466 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-734713" context rescaled to 1 replicas
	I0414 13:51:31.937908 1234466 pod_ready.go:103] pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status "Ready":"False"
	I0414 13:51:33.938334 1234466 pod_ready.go:103] pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status "Ready":"False"
	I0414 13:51:36.437417 1234466 pod_ready.go:103] pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status "Ready":"False"
	I0414 13:51:38.440165 1234466 pod_ready.go:103] pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status "Ready":"False"
	I0414 13:51:40.938065 1234466 pod_ready.go:98] pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:40 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.72 HostIPs:[{IP:192.168.50.72}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 13:51:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 13:51:30 +0000 UTC,FinishedAt:2025-04-14 13:51:40 +0000 UTC,ContainerID:cri-o://6a7952f59cfef1a40a0dd262985051e5e943a788e6ba50cc4cf0e5f6d9abca2c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://6a7952f59cfef1a40a0dd262985051e5e943a788e6ba50cc4cf0e5f6d9abca2c Started:0xc0023076d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002346b90} {Name:kube-api-access-4n6jv MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002346ba0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 13:51:40.938101 1234466 pod_ready.go:82] duration metric: took 11.007179529s for pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace to be "Ready" ...
	E0414 13:51:40.938117 1234466 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:40 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.72 HostIPs:[{IP:192.168.50.72}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 13:51:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 13:51:30 +0000 UTC,FinishedAt:2025-04-14 13:51:40 +0000 UTC,ContainerID:cri-o://6a7952f59cfef1a40a0dd262985051e5e943a788e6ba50cc4cf0e5f6d9abca2c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://6a7952f59cfef1a40a0dd262985051e5e943a788e6ba50cc4cf0e5f6d9abca2c Started:0xc0023076d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002346b90} {Name:kube-api-access-4n6jv MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002346ba0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 13:51:40.938141 1234466 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-z92sg" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.946346 1234466 pod_ready.go:93] pod "coredns-668d6bf9bc-z92sg" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:40.946379 1234466 pod_ready.go:82] duration metric: took 8.225626ms for pod "coredns-668d6bf9bc-z92sg" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.946392 1234466 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.955269 1234466 pod_ready.go:93] pod "etcd-bridge-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:40.955296 1234466 pod_ready.go:82] duration metric: took 8.896656ms for pod "etcd-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.955307 1234466 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.961998 1234466 pod_ready.go:93] pod "kube-apiserver-bridge-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:40.962029 1234466 pod_ready.go:82] duration metric: took 6.713066ms for pod "kube-apiserver-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.962042 1234466 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.966175 1234466 pod_ready.go:93] pod "kube-controller-manager-bridge-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:40.966217 1234466 pod_ready.go:82] duration metric: took 4.165309ms for pod "kube-controller-manager-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.966236 1234466 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-9pk92" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:41.335640 1234466 pod_ready.go:93] pod "kube-proxy-9pk92" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:41.335691 1234466 pod_ready.go:82] duration metric: took 369.447003ms for pod "kube-proxy-9pk92" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:41.335704 1234466 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:41.735093 1234466 pod_ready.go:93] pod "kube-scheduler-bridge-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:41.735135 1234466 pod_ready.go:82] duration metric: took 399.422375ms for pod "kube-scheduler-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:41.735151 1234466 pod_ready.go:39] duration metric: took 11.822593264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:51:41.735178 1234466 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:51:41.735248 1234466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:51:41.751642 1234466 api_server.go:72] duration metric: took 12.943438798s to wait for apiserver process to appear ...
	I0414 13:51:41.751690 1234466 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:51:41.751718 1234466 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0414 13:51:41.757618 1234466 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0414 13:51:41.758822 1234466 api_server.go:141] control plane version: v1.32.2
	I0414 13:51:41.758852 1234466 api_server.go:131] duration metric: took 7.153797ms to wait for apiserver health ...
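	(The healthz probe at api_server.go:253 above can be reproduced by hand against the same endpoint; a minimal sketch, assuming the endpoint is reachable from the host and that /healthz is readable without client credentials, which depends on the cluster's anonymous-auth settings:
	    curl -k https://192.168.50.72:8443/healthz
	    ok
	If anonymous access is disabled, the same request would need the client certificate and key from the profile's kubeconfig.)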
	I0414 13:51:41.758862 1234466 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:51:41.937084 1234466 system_pods.go:59] 7 kube-system pods found
	I0414 13:51:41.937135 1234466 system_pods.go:61] "coredns-668d6bf9bc-z92sg" [42590442-580d-41ab-9efe-2517a068eb17] Running
	I0414 13:51:41.937143 1234466 system_pods.go:61] "etcd-bridge-734713" [e47a695e-3434-4c51-afaf-9246153d30a2] Running
	I0414 13:51:41.937148 1234466 system_pods.go:61] "kube-apiserver-bridge-734713" [5ce46cf1-b1f6-4cc0-9cfb-f1d687fe8025] Running
	I0414 13:51:41.937153 1234466 system_pods.go:61] "kube-controller-manager-bridge-734713" [0291b7b4-84c2-4121-b702-5c297388a045] Running
	I0414 13:51:41.937158 1234466 system_pods.go:61] "kube-proxy-9pk92" [d27ad409-1c19-4af9-8a04-d7e93ac9d8e0] Running
	I0414 13:51:41.937163 1234466 system_pods.go:61] "kube-scheduler-bridge-734713" [8ca465da-2ca0-46da-b8eb-504b5f12118e] Running
	I0414 13:51:41.937167 1234466 system_pods.go:61] "storage-provisioner" [98448f6e-961f-4647-bf0b-33237d9f4833] Running
	I0414 13:51:41.937176 1234466 system_pods.go:74] duration metric: took 178.306954ms to wait for pod list to return data ...
	I0414 13:51:41.937187 1234466 default_sa.go:34] waiting for default service account to be created ...
	I0414 13:51:42.135201 1234466 default_sa.go:45] found service account: "default"
	I0414 13:51:42.135235 1234466 default_sa.go:55] duration metric: took 198.036975ms for default service account to be created ...
	I0414 13:51:42.135249 1234466 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 13:51:42.335420 1234466 system_pods.go:86] 7 kube-system pods found
	I0414 13:51:42.335468 1234466 system_pods.go:89] "coredns-668d6bf9bc-z92sg" [42590442-580d-41ab-9efe-2517a068eb17] Running
	I0414 13:51:42.335477 1234466 system_pods.go:89] "etcd-bridge-734713" [e47a695e-3434-4c51-afaf-9246153d30a2] Running
	I0414 13:51:42.335483 1234466 system_pods.go:89] "kube-apiserver-bridge-734713" [5ce46cf1-b1f6-4cc0-9cfb-f1d687fe8025] Running
	I0414 13:51:42.335489 1234466 system_pods.go:89] "kube-controller-manager-bridge-734713" [0291b7b4-84c2-4121-b702-5c297388a045] Running
	I0414 13:51:42.335496 1234466 system_pods.go:89] "kube-proxy-9pk92" [d27ad409-1c19-4af9-8a04-d7e93ac9d8e0] Running
	I0414 13:51:42.335502 1234466 system_pods.go:89] "kube-scheduler-bridge-734713" [8ca465da-2ca0-46da-b8eb-504b5f12118e] Running
	I0414 13:51:42.335507 1234466 system_pods.go:89] "storage-provisioner" [98448f6e-961f-4647-bf0b-33237d9f4833] Running
	I0414 13:51:42.335518 1234466 system_pods.go:126] duration metric: took 200.259515ms to wait for k8s-apps to be running ...
	I0414 13:51:42.335529 1234466 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 13:51:42.335592 1234466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:51:42.353882 1234466 system_svc.go:56] duration metric: took 18.329065ms WaitForService to wait for kubelet
	I0414 13:51:42.353934 1234466 kubeadm.go:582] duration metric: took 13.545741946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:51:42.353954 1234466 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:51:42.536452 1234466 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:51:42.536505 1234466 node_conditions.go:123] node cpu capacity is 2
	I0414 13:51:42.536526 1234466 node_conditions.go:105] duration metric: took 182.566053ms to run NodePressure ...
	I0414 13:51:42.536542 1234466 start.go:241] waiting for startup goroutines ...
	I0414 13:51:42.536552 1234466 start.go:246] waiting for cluster config update ...
	I0414 13:51:42.536568 1234466 start.go:255] writing updated cluster config ...
	I0414 13:51:42.536998 1234466 ssh_runner.go:195] Run: rm -f paused
	I0414 13:51:42.602285 1234466 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 13:51:42.604659 1234466 out.go:177] * Done! kubectl is now configured to use "bridge-734713" cluster and "default" namespace by default
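	(A quick manual follow-up to the "Done!" message, mirroring the system-pods listing logged above; a sketch rather than part of the test output, assuming the kubectl context carries the profile name bridge-734713 as that message states:
	    kubectl --context bridge-734713 get nodes
	    kubectl --context bridge-734713 -n kube-system get pods
	Both should show the single Ready node and the seven Running kube-system pods enumerated at system_pods.go above.)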
	
	
	==> CRI-O <==
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.467792485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639192467768503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3be4802-7bbd-46f4-a3f8-0ecc3c5906e3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.468418560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b77cb967-34e0-4848-8a97-e9a34e317685 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.468533474Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b77cb967-34e0-4848-8a97-e9a34e317685 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.468574384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b77cb967-34e0-4848-8a97-e9a34e317685 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.501373501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98c1a634-2cd0-4661-b495-c5d3dc665e55 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.501512028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98c1a634-2cd0-4661-b495-c5d3dc665e55 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.502770761Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=969448ec-6e48-4bc4-9124-3e3d5f396088 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.503173197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639192503149237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=969448ec-6e48-4bc4-9124-3e3d5f396088 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.503980846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c456338-b48f-4a4e-bbbb-4b77b277dd31 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.504075958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c456338-b48f-4a4e-bbbb-4b77b277dd31 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.504119377Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5c456338-b48f-4a4e-bbbb-4b77b277dd31 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.542381569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16d9eb37-a42c-4e30-b8dd-dd5711384927 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.542500405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16d9eb37-a42c-4e30-b8dd-dd5711384927 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.544142472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf7c87a3-2c10-4bdb-9492-b9c94b2a4751 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.544618811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639192544589533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf7c87a3-2c10-4bdb-9492-b9c94b2a4751 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.545275780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3af6611d-63c3-4fe6-b1b4-bc212fd028d7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.545326780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3af6611d-63c3-4fe6-b1b4-bc212fd028d7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.545359238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3af6611d-63c3-4fe6-b1b4-bc212fd028d7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.578159626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb397c3a-67b5-4822-9a8c-1d2639432f93 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.578236494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb397c3a-67b5-4822-9a8c-1d2639432f93 name=/runtime.v1.RuntimeService/Version
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.579705064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ace3d82e-26b2-4bc5-abca-e8e0d2d6d42b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.580087831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639192580060207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ace3d82e-26b2-4bc5-abca-e8e0d2d6d42b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.580659889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7a45cfb-ba5b-44cf-933b-dffad7d7d27c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.580712032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7a45cfb-ba5b-44cf-933b-dffad7d7d27c name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 13:59:52 old-k8s-version-966509 crio[626]: time="2025-04-14 13:59:52.580749427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e7a45cfb-ba5b-44cf-933b-dffad7d7d27c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 13:42] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052303] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037424] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.184043] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.220918] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.632782] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.648390] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.065849] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072695] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.193720] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.145562] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.255078] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +8.069901] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.072646] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.342677] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +8.500421] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 13:46] systemd-fstab-generator[4889]: Ignoring "noauto" option for root device
	[Apr14 13:48] systemd-fstab-generator[5166]: Ignoring "noauto" option for root device
	[  +0.062342] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:59:52 up 17 min,  0 users,  load average: 0.00, 0.03, 0.05
	Linux old-k8s-version-966509 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c870b0, 0xc0002d66a0)
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: goroutine 152 [chan receive]:
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc0008aa510)
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: goroutine 153 [select]:
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000749ef0, 0x4f0ac20, 0xc000c75f90, 0x1, 0xc00009e0c0)
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0009442a0, 0xc00009e0c0)
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c870f0, 0xc0002d6780)
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 14 13:59:52 old-k8s-version-966509 kubelet[6346]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 14 13:59:52 old-k8s-version-966509 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 14 13:59:52 old-k8s-version-966509 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509 -n old-k8s-version-966509
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 2 (263.983252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-966509" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.73s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (368.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:59:57.145897 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 13:59:59.124245 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 14:00:05.024231 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 14:00:26.827859 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 14:00:33.995441 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
  [previous line repeated 16 more times]
E0414 14:01:01.698712 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
  [previous line repeated 15 more times]
E0414 14:01:17.920175 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
  [previous line repeated 24 more times]
E0414 14:01:43.137067 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
  [previous line repeated 2 more times]
E0414 14:01:45.624405 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
  [previous line repeated 2 more times]
E0414 14:01:49.053545 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
  [previous line repeated 21 more times]
E0414 14:02:10.840945 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
  [previous line repeated 29 more times]
E0414 14:02:40.311796 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/auto-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
  [previous line repeated 37 more times]
E0414 14:03:18.440104 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
  [previous line repeated 23 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 14:03:51.046331 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/kindnet-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 14:04:37.319699 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/calico-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 14:04:38.305405 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 14:04:41.507530 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 14:04:57.146248 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 14:04:59.123871 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/custom-flannel-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
E0414 14:05:33.994596 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/enable-default-cni-734713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.227:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.227:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509 -n old-k8s-version-966509
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 2 (275.847601ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-966509" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-966509 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-966509 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.335µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-966509 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
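The connection-refused warnings above are the harness polling the apiserver at 192.168.61.227:8443 for a pod matching the k8s-app=kubernetes-dashboard label. As a hedged aside (these commands are not part of the harness output, only a sketch of an equivalent manual check against the same profile), the poll corresponds roughly to:

	# list dashboard pods by the same label selector the test waits on
	kubectl --context old-k8s-version-966509 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# confirm whether the profile's apiserver is reachable at all
	out/minikube-linux-amd64 status -p old-k8s-version-966509

Both would be expected to fail in the same way for as long as the apiserver on 192.168.61.227:8443 refuses connections.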
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 2 (248.060888ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-966509 logs -n 25
E0414 14:06:01.373446 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-734713 sudo iptables                       | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo docker                         | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo cat                            | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo                                | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo find                           | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-734713 sudo crio                           | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-734713                                     | bridge-734713 | jenkins | v1.35.0 | 14 Apr 25 13:52 UTC | 14 Apr 25 13:52 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 13:50:31
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 13:50:31.474135 1234466 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:50:31.474253 1234466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:50:31.474257 1234466 out.go:358] Setting ErrFile to fd 2...
	I0414 13:50:31.474262 1234466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:50:31.474520 1234466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:50:31.475288 1234466 out.go:352] Setting JSON to false
	I0414 13:50:31.477061 1234466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19979,"bootTime":1744618653,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:50:31.477154 1234466 start.go:139] virtualization: kvm guest
	I0414 13:50:31.479607 1234466 out.go:177] * [bridge-734713] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:50:31.481863 1234466 notify.go:220] Checking for updates...
	I0414 13:50:31.481878 1234466 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 13:50:31.483700 1234466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:50:31.485289 1234466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:50:31.487251 1234466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:50:31.489524 1234466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:50:31.491617 1234466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:50:31.494384 1234466 config.go:182] Loaded profile config "enable-default-cni-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:50:31.494599 1234466 config.go:182] Loaded profile config "flannel-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:50:31.494768 1234466 config.go:182] Loaded profile config "old-k8s-version-966509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 13:50:31.494943 1234466 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:50:31.538765 1234466 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 13:50:31.540246 1234466 start.go:297] selected driver: kvm2
	I0414 13:50:31.540269 1234466 start.go:901] validating driver "kvm2" against <nil>
	I0414 13:50:31.540283 1234466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:50:31.541164 1234466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:50:31.541264 1234466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 13:50:31.559397 1234466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 13:50:31.559459 1234466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 13:50:31.559769 1234466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:50:31.559813 1234466 cni.go:84] Creating CNI manager for "bridge"
	I0414 13:50:31.559821 1234466 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 13:50:31.559887 1234466 start.go:340] cluster config:
	{Name:bridge-734713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:50:31.560014 1234466 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 13:50:31.562179 1234466 out.go:177] * Starting "bridge-734713" primary control-plane node in "bridge-734713" cluster
	I0414 13:50:29.334946 1231023 pod_ready.go:103] pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace has status "Ready":"False"
	I0414 13:50:31.833321 1231023 pod_ready.go:98] pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:31 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.69 HostIPs:[{IP:192.168.39.69}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 13:50:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 13:50:21 +0000 UTC,FinishedAt:2025-04-14 13:50:31 +0000 UTC,ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3 Started:0xc000545230 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0007929a0} {Name:kube-api-access-gz8ls MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0007929b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 13:50:31.833367 1231023 pod_ready.go:82] duration metric: took 12.006367856s for pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace to be "Ready" ...
	E0414 13:50:31.833383 1231023 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-gffx9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:31 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:50:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.69 HostIPs:[{IP:192.168.39.69}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 13:50:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 13:50:21 +0000 UTC,FinishedAt:2025-04-14 13:50:31 +0000 UTC,ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://5138b7cb6eef39e34725d30cbaef651cdf729aa568144ef8266db07beeb2d6f3 Started:0xc000545230 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0007929a0} {Name:kube-api-access-gz8ls MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0007929b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 13:50:31.833400 1231023 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wc7z8" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.838889 1231023 pod_ready.go:93] pod "coredns-668d6bf9bc-wc7z8" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.838917 1231023 pod_ready.go:82] duration metric: took 5.507401ms for pod "coredns-668d6bf9bc-wc7z8" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.838931 1231023 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.846654 1231023 pod_ready.go:93] pod "etcd-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.846680 1231023 pod_ready.go:82] duration metric: took 7.739982ms for pod "etcd-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.846693 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.851573 1231023 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.851599 1231023 pod_ready.go:82] duration metric: took 4.900716ms for pod "kube-apiserver-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.851610 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.861178 1231023 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:31.861205 1231023 pod_ready.go:82] duration metric: took 9.588121ms for pod "kube-controller-manager-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:31.861215 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-9w89x" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.231339 1231023 pod_ready.go:93] pod "kube-proxy-9w89x" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:32.231363 1231023 pod_ready.go:82] duration metric: took 370.139759ms for pod "kube-proxy-9w89x" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.231373 1231023 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.630989 1231023 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:50:32.631015 1231023 pod_ready.go:82] duration metric: took 399.636056ms for pod "kube-scheduler-enable-default-cni-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:50:32.631024 1231023 pod_ready.go:39] duration metric: took 12.810229756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:50:32.631043 1231023 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:50:32.631107 1231023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:50:32.645651 1231023 api_server.go:72] duration metric: took 13.222143925s to wait for apiserver process to appear ...
	I0414 13:50:32.645687 1231023 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:50:32.645709 1231023 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I0414 13:50:32.651253 1231023 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I0414 13:50:32.652492 1231023 api_server.go:141] control plane version: v1.32.2
	I0414 13:50:32.652525 1231023 api_server.go:131] duration metric: took 6.829312ms to wait for apiserver health ...
	I0414 13:50:32.652539 1231023 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:50:32.832476 1231023 system_pods.go:59] 7 kube-system pods found
	I0414 13:50:32.832516 1231023 system_pods.go:61] "coredns-668d6bf9bc-wc7z8" [0f79f746-c943-4e60-a284-492bb981e61f] Running
	I0414 13:50:32.832522 1231023 system_pods.go:61] "etcd-enable-default-cni-734713" [19510a8c-d3ce-4d2d-ae16-913bbdf644aa] Running
	I0414 13:50:32.832527 1231023 system_pods.go:61] "kube-apiserver-enable-default-cni-734713" [adb74c7b-1709-4a20-b0af-e71ceff39a2c] Running
	I0414 13:50:32.832531 1231023 system_pods.go:61] "kube-controller-manager-enable-default-cni-734713" [1ac3ccc9-9356-4859-a021-41a7adf3620d] Running
	I0414 13:50:32.832534 1231023 system_pods.go:61] "kube-proxy-9w89x" [ced87d7f-cfc0-4474-bbff-273bf081d028] Running
	I0414 13:50:32.832539 1231023 system_pods.go:61] "kube-scheduler-enable-default-cni-734713" [5be3eb8d-6845-4526-b0bd-e870fa09ab3d] Running
	I0414 13:50:32.832542 1231023 system_pods.go:61] "storage-provisioner" [a321a108-c3a8-47b6-bfaa-c32b99f04b1e] Running
	I0414 13:50:32.832548 1231023 system_pods.go:74] duration metric: took 180.003646ms to wait for pod list to return data ...
	I0414 13:50:32.832556 1231023 default_sa.go:34] waiting for default service account to be created ...
	I0414 13:50:33.031788 1231023 default_sa.go:45] found service account: "default"
	I0414 13:50:33.031827 1231023 default_sa.go:55] duration metric: took 199.260003ms for default service account to be created ...
	I0414 13:50:33.031842 1231023 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 13:50:33.232342 1231023 system_pods.go:86] 7 kube-system pods found
	I0414 13:50:33.232377 1231023 system_pods.go:89] "coredns-668d6bf9bc-wc7z8" [0f79f746-c943-4e60-a284-492bb981e61f] Running
	I0414 13:50:33.232383 1231023 system_pods.go:89] "etcd-enable-default-cni-734713" [19510a8c-d3ce-4d2d-ae16-913bbdf644aa] Running
	I0414 13:50:33.232387 1231023 system_pods.go:89] "kube-apiserver-enable-default-cni-734713" [adb74c7b-1709-4a20-b0af-e71ceff39a2c] Running
	I0414 13:50:33.232391 1231023 system_pods.go:89] "kube-controller-manager-enable-default-cni-734713" [1ac3ccc9-9356-4859-a021-41a7adf3620d] Running
	I0414 13:50:33.232395 1231023 system_pods.go:89] "kube-proxy-9w89x" [ced87d7f-cfc0-4474-bbff-273bf081d028] Running
	I0414 13:50:33.232399 1231023 system_pods.go:89] "kube-scheduler-enable-default-cni-734713" [5be3eb8d-6845-4526-b0bd-e870fa09ab3d] Running
	I0414 13:50:33.232402 1231023 system_pods.go:89] "storage-provisioner" [a321a108-c3a8-47b6-bfaa-c32b99f04b1e] Running
	I0414 13:50:33.232408 1231023 system_pods.go:126] duration metric: took 200.561466ms to wait for k8s-apps to be running ...
	I0414 13:50:33.232415 1231023 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 13:50:33.232464 1231023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:50:33.247725 1231023 system_svc.go:56] duration metric: took 15.294005ms WaitForService to wait for kubelet
	I0414 13:50:33.247763 1231023 kubeadm.go:582] duration metric: took 13.824265507s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:50:33.247796 1231023 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:50:33.431949 1231023 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:50:33.431992 1231023 node_conditions.go:123] node cpu capacity is 2
	I0414 13:50:33.432010 1231023 node_conditions.go:105] duration metric: took 184.207615ms to run NodePressure ...
	I0414 13:50:33.432026 1231023 start.go:241] waiting for startup goroutines ...
	I0414 13:50:33.432036 1231023 start.go:246] waiting for cluster config update ...
	I0414 13:50:33.432076 1231023 start.go:255] writing updated cluster config ...
	I0414 13:50:33.432420 1231023 ssh_runner.go:195] Run: rm -f paused
	I0414 13:50:33.493793 1231023 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 13:50:33.496245 1231023 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-734713" cluster and "default" namespace by default
	I0414 13:50:29.976954 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:29.978228 1232896 main.go:141] libmachine: (flannel-734713) DBG | unable to find current IP address of domain flannel-734713 in network mk-flannel-734713
	I0414 13:50:29.978298 1232896 main.go:141] libmachine: (flannel-734713) DBG | I0414 13:50:29.977978 1232920 retry.go:31] will retry after 3.36970346s: waiting for domain to come up
	I0414 13:50:33.352083 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:33.352787 1232896 main.go:141] libmachine: (flannel-734713) DBG | unable to find current IP address of domain flannel-734713 in network mk-flannel-734713
	I0414 13:50:33.352813 1232896 main.go:141] libmachine: (flannel-734713) DBG | I0414 13:50:33.352721 1232920 retry.go:31] will retry after 4.281011349s: waiting for domain to come up
	I0414 13:50:31.563813 1234466 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 13:50:31.563891 1234466 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 13:50:31.563915 1234466 cache.go:56] Caching tarball of preloaded images
	I0414 13:50:31.564056 1234466 preload.go:172] Found /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 13:50:31.564078 1234466 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 13:50:31.564242 1234466 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/config.json ...
	I0414 13:50:31.564277 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/config.json: {Name:mk2204b108f022f99d564aa50c55629979eef512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:31.564486 1234466 start.go:360] acquireMachinesLock for bridge-734713: {Name:mk1c744e856a4885e6214d755048926590b4b12b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 13:50:37.637020 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.637866 1232896 main.go:141] libmachine: (flannel-734713) found domain IP: 192.168.72.152
	I0414 13:50:37.637896 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has current primary IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.637901 1232896 main.go:141] libmachine: (flannel-734713) reserving static IP address...
	I0414 13:50:37.638450 1232896 main.go:141] libmachine: (flannel-734713) DBG | unable to find host DHCP lease matching {name: "flannel-734713", mac: "52:54:00:9e:9a:48", ip: "192.168.72.152"} in network mk-flannel-734713
	I0414 13:50:37.754591 1232896 main.go:141] libmachine: (flannel-734713) DBG | Getting to WaitForSSH function...
	I0414 13:50:37.754621 1232896 main.go:141] libmachine: (flannel-734713) reserved static IP address 192.168.72.152 for domain flannel-734713
	I0414 13:50:37.754634 1232896 main.go:141] libmachine: (flannel-734713) waiting for SSH...
	I0414 13:50:37.758535 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.759318 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:37.759361 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.759550 1232896 main.go:141] libmachine: (flannel-734713) DBG | Using SSH client type: external
	I0414 13:50:37.759582 1232896 main.go:141] libmachine: (flannel-734713) DBG | Using SSH private key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa (-rw-------)
	I0414 13:50:37.759619 1232896 main.go:141] libmachine: (flannel-734713) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:50:37.759640 1232896 main.go:141] libmachine: (flannel-734713) DBG | About to run SSH command:
	I0414 13:50:37.759648 1232896 main.go:141] libmachine: (flannel-734713) DBG | exit 0
	I0414 13:50:37.892219 1232896 main.go:141] libmachine: (flannel-734713) DBG | SSH cmd err, output: <nil>: 
	I0414 13:50:37.892578 1232896 main.go:141] libmachine: (flannel-734713) KVM machine creation complete
	I0414 13:50:37.892927 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetConfigRaw
	I0414 13:50:37.893493 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:37.893697 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:37.893915 1232896 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 13:50:37.893934 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetState
	I0414 13:50:37.895553 1232896 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 13:50:37.895573 1232896 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 13:50:37.895581 1232896 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 13:50:37.895590 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:37.899289 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.899748 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:37.899782 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:37.900093 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:37.900351 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:37.900554 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:37.900695 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:37.900911 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:37.901234 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:37.901251 1232896 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 13:50:38.015885 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:50:38.015916 1232896 main.go:141] libmachine: Detecting the provisioner...
	I0414 13:50:38.015928 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.019947 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.020401 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.020434 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.020711 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.021012 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.021325 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.021507 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.021832 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.022086 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.022100 1232896 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 13:50:38.136844 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 13:50:38.136924 1232896 main.go:141] libmachine: found compatible host: buildroot
	I0414 13:50:38.136932 1232896 main.go:141] libmachine: Provisioning with buildroot...
	I0414 13:50:38.136941 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetMachineName
	I0414 13:50:38.137270 1232896 buildroot.go:166] provisioning hostname "flannel-734713"
	I0414 13:50:38.137308 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetMachineName
	I0414 13:50:38.137547 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.141614 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.142106 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.142144 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.142427 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.142670 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.142873 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.143110 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.143324 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.143622 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.143683 1232896 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-734713 && echo "flannel-734713" | sudo tee /etc/hostname
	I0414 13:50:38.270664 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-734713
	
	I0414 13:50:38.270700 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.274038 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.274480 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.274509 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.274796 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.275053 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.275214 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.275388 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.275567 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.275847 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.275876 1232896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-734713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-734713/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-734713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:50:38.401361 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:50:38.401397 1232896 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 13:50:38.401429 1232896 buildroot.go:174] setting up certificates
	I0414 13:50:38.401441 1232896 provision.go:84] configureAuth start
	I0414 13:50:38.401451 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetMachineName
	I0414 13:50:38.401767 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:38.404744 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.405311 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.405344 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.405588 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.408468 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.408941 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.408973 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.409159 1232896 provision.go:143] copyHostCerts
	I0414 13:50:38.409231 1232896 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem, removing ...
	I0414 13:50:38.409255 1232896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem
	I0414 13:50:38.409353 1232896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 13:50:38.409483 1232896 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem, removing ...
	I0414 13:50:38.409494 1232896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem
	I0414 13:50:38.409521 1232896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 13:50:38.409584 1232896 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem, removing ...
	I0414 13:50:38.409592 1232896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem
	I0414 13:50:38.409616 1232896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 13:50:38.409667 1232896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.flannel-734713 san=[127.0.0.1 192.168.72.152 flannel-734713 localhost minikube]
	I0414 13:50:38.622027 1232896 provision.go:177] copyRemoteCerts
	I0414 13:50:38.622101 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:50:38.622129 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.625644 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.626308 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.626341 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.626672 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.626943 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.627175 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.627360 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:38.717313 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:50:38.746438 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0414 13:50:38.774217 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 13:50:38.800993 1232896 provision.go:87] duration metric: took 399.533017ms to configureAuth
	I0414 13:50:38.801037 1232896 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:50:38.801286 1232896 config.go:182] Loaded profile config "flannel-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:50:38.801390 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:38.804612 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.805077 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:38.805108 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:38.805235 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:38.805516 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.805686 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:38.805838 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:38.806026 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:38.806227 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:38.806245 1232896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:50:39.047256 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:50:39.047322 1232896 main.go:141] libmachine: Checking connection to Docker...
	I0414 13:50:39.047335 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetURL
	I0414 13:50:39.049101 1232896 main.go:141] libmachine: (flannel-734713) DBG | using libvirt version 6000000
	I0414 13:50:39.052133 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.052668 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.052706 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.052895 1232896 main.go:141] libmachine: Docker is up and running!
	I0414 13:50:39.052917 1232896 main.go:141] libmachine: Reticulating splines...
	I0414 13:50:39.052927 1232896 client.go:171] duration metric: took 24.751714339s to LocalClient.Create
	I0414 13:50:39.052964 1232896 start.go:167] duration metric: took 24.751802794s to libmachine.API.Create "flannel-734713"
	I0414 13:50:39.052977 1232896 start.go:293] postStartSetup for "flannel-734713" (driver="kvm2")
	I0414 13:50:39.052993 1232896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:50:39.053021 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.053344 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:50:39.053380 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.056234 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.056651 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.056683 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.056948 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.057181 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.057386 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.057603 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:39.147363 1232896 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:50:39.152531 1232896 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:50:39.152565 1232896 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 13:50:39.152666 1232896 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 13:50:39.152797 1232896 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem -> 11757462.pem in /etc/ssl/certs
	I0414 13:50:39.152913 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:50:39.163686 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:50:39.313206 1234466 start.go:364] duration metric: took 7.7486593s to acquireMachinesLock for "bridge-734713"
	I0414 13:50:39.313286 1234466 start.go:93] Provisioning new machine with config: &{Name:bridge-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:50:39.313465 1234466 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 13:50:39.191538 1232896 start.go:296] duration metric: took 138.53561ms for postStartSetup
	I0414 13:50:39.191625 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetConfigRaw
	I0414 13:50:39.192675 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:39.195982 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.196370 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.196403 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.196711 1232896 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/config.json ...
	I0414 13:50:39.196933 1232896 start.go:128] duration metric: took 24.920662395s to createHost
	I0414 13:50:39.196962 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.199575 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.199971 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.200009 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.200265 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.200506 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.200716 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.200836 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.201017 1232896 main.go:141] libmachine: Using SSH client type: native
	I0414 13:50:39.201235 1232896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0414 13:50:39.201252 1232896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:50:39.312992 1232896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744638639.294895113
	
	I0414 13:50:39.313027 1232896 fix.go:216] guest clock: 1744638639.294895113
	I0414 13:50:39.313040 1232896 fix.go:229] Guest: 2025-04-14 13:50:39.294895113 +0000 UTC Remote: 2025-04-14 13:50:39.196948569 +0000 UTC m=+25.069092558 (delta=97.946544ms)
	I0414 13:50:39.313076 1232896 fix.go:200] guest clock delta is within tolerance: 97.946544ms
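	(The delta above is simply the guest timestamp minus the remote timestamp: 1744638639.294895113 - 1744638639.196948569 = 0.097946544 s, i.e. the 97.946544ms reported, which is inside minikube's clock-skew tolerance, so the guest clock is left untouched.)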
	I0414 13:50:39.313084 1232896 start.go:83] releasing machines lock for "flannel-734713", held for 25.03689115s
	I0414 13:50:39.313123 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.313495 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:39.316913 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.317374 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.317407 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.317648 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.318454 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.318745 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:50:39.318848 1232896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:50:39.318899 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.319070 1232896 ssh_runner.go:195] Run: cat /version.json
	I0414 13:50:39.319106 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:50:39.322650 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.322688 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.323095 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.323127 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.323152 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:39.323175 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:39.323347 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.323526 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:50:39.323628 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.323746 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:50:39.323868 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.323943 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:50:39.324058 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:39.324098 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:50:39.429639 1232896 ssh_runner.go:195] Run: systemctl --version
	I0414 13:50:39.437143 1232896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:50:39.610972 1232896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:50:39.618687 1232896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:50:39.618790 1232896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:50:39.638285 1232896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:50:39.638323 1232896 start.go:495] detecting cgroup driver to use...
	I0414 13:50:39.638408 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:50:39.657548 1232896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:50:39.672881 1232896 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:50:39.672968 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:50:39.688263 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:50:39.705072 1232896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:50:39.850153 1232896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:50:40.021507 1232896 docker.go:233] disabling docker service ...
	I0414 13:50:40.021590 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:50:40.039954 1232896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:50:40.056200 1232896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:50:40.200612 1232896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:50:40.323358 1232896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:50:40.339938 1232896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:50:40.362920 1232896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 13:50:40.363030 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.377645 1232896 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:50:40.377729 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.389966 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.401671 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.412575 1232896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:50:40.424261 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.436885 1232896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:50:40.458584 1232896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
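	(Assuming a stock /etc/crio/crio.conf.d/02-crio.conf as the starting point, the sed edits above leave the relevant fragment looking roughly like:
	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	i.e. CRI-O is pinned to pause:3.10, uses the cgroupfs cgroup manager to match the kubelet's cgroupDriver further down, runs conmon in the pod cgroup, and lets pod processes bind ports below 1024 without extra privileges.)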
	I0414 13:50:40.471169 1232896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:50:40.481527 1232896 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:50:40.481614 1232896 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:50:40.494853 1232896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
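	(The status-255 sysctl probe above only means br_netfilter was not loaded yet; the modprobe and ip_forward write immediately above take care of that. A manual equivalent on a test host would be roughly:
	  sudo modprobe br_netfilter
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	  sysctl net.bridge.bridge-nf-call-iptables    # resolves once the module is loaded
	shown here only as an illustration; they are the same commands the runner executes over SSH.)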
	I0414 13:50:40.509112 1232896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:50:40.638794 1232896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:50:40.738537 1232896 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:50:40.738626 1232896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:50:40.744588 1232896 start.go:563] Will wait 60s for crictl version
	I0414 13:50:40.744653 1232896 ssh_runner.go:195] Run: which crictl
	I0414 13:50:40.749602 1232896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:50:40.793798 1232896 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:50:40.793927 1232896 ssh_runner.go:195] Run: crio --version
	I0414 13:50:40.827010 1232896 ssh_runner.go:195] Run: crio --version
	I0414 13:50:40.862877 1232896 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 13:50:39.315866 1234466 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 13:50:39.316102 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:50:39.316179 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:50:39.339239 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I0414 13:50:39.340033 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:50:39.340744 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:50:39.340773 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:50:39.341299 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:50:39.341627 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetMachineName
	I0414 13:50:39.341844 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:50:39.342129 1234466 start.go:159] libmachine.API.Create for "bridge-734713" (driver="kvm2")
	I0414 13:50:39.342175 1234466 client.go:168] LocalClient.Create starting
	I0414 13:50:39.342222 1234466 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem
	I0414 13:50:39.342289 1234466 main.go:141] libmachine: Decoding PEM data...
	I0414 13:50:39.342312 1234466 main.go:141] libmachine: Parsing certificate...
	I0414 13:50:39.342402 1234466 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem
	I0414 13:50:39.342433 1234466 main.go:141] libmachine: Decoding PEM data...
	I0414 13:50:39.342446 1234466 main.go:141] libmachine: Parsing certificate...
	I0414 13:50:39.342470 1234466 main.go:141] libmachine: Running pre-create checks...
	I0414 13:50:39.342485 1234466 main.go:141] libmachine: (bridge-734713) Calling .PreCreateCheck
	I0414 13:50:39.342957 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetConfigRaw
	I0414 13:50:39.343643 1234466 main.go:141] libmachine: Creating machine...
	I0414 13:50:39.343685 1234466 main.go:141] libmachine: (bridge-734713) Calling .Create
	I0414 13:50:39.343972 1234466 main.go:141] libmachine: (bridge-734713) creating KVM machine...
	I0414 13:50:39.343990 1234466 main.go:141] libmachine: (bridge-734713) creating network...
	I0414 13:50:39.345635 1234466 main.go:141] libmachine: (bridge-734713) DBG | found existing default KVM network
	I0414 13:50:39.347502 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.347251 1234612 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:76:3b:72} reservation:<nil>}
	I0414 13:50:39.349222 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.349053 1234612 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000340000}
	I0414 13:50:39.349267 1234466 main.go:141] libmachine: (bridge-734713) DBG | created network xml: 
	I0414 13:50:39.349280 1234466 main.go:141] libmachine: (bridge-734713) DBG | <network>
	I0414 13:50:39.349301 1234466 main.go:141] libmachine: (bridge-734713) DBG |   <name>mk-bridge-734713</name>
	I0414 13:50:39.349317 1234466 main.go:141] libmachine: (bridge-734713) DBG |   <dns enable='no'/>
	I0414 13:50:39.349324 1234466 main.go:141] libmachine: (bridge-734713) DBG |   
	I0414 13:50:39.349332 1234466 main.go:141] libmachine: (bridge-734713) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0414 13:50:39.349338 1234466 main.go:141] libmachine: (bridge-734713) DBG |     <dhcp>
	I0414 13:50:39.349349 1234466 main.go:141] libmachine: (bridge-734713) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0414 13:50:39.349369 1234466 main.go:141] libmachine: (bridge-734713) DBG |     </dhcp>
	I0414 13:50:39.349383 1234466 main.go:141] libmachine: (bridge-734713) DBG |   </ip>
	I0414 13:50:39.349388 1234466 main.go:141] libmachine: (bridge-734713) DBG |   
	I0414 13:50:39.349396 1234466 main.go:141] libmachine: (bridge-734713) DBG | </network>
	I0414 13:50:39.349401 1234466 main.go:141] libmachine: (bridge-734713) DBG | 
	I0414 13:50:39.356260 1234466 main.go:141] libmachine: (bridge-734713) DBG | trying to create private KVM network mk-bridge-734713 192.168.50.0/24...
	I0414 13:50:39.446944 1234466 main.go:141] libmachine: (bridge-734713) setting up store path in /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713 ...
	I0414 13:50:39.446981 1234466 main.go:141] libmachine: (bridge-734713) DBG | private KVM network mk-bridge-734713 192.168.50.0/24 created
	I0414 13:50:39.446995 1234466 main.go:141] libmachine: (bridge-734713) building disk image from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 13:50:39.447018 1234466 main.go:141] libmachine: (bridge-734713) Downloading /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 13:50:39.447037 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.446840 1234612 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:50:39.775963 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:39.775805 1234612 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa...
	I0414 13:50:40.534757 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:40.534576 1234612 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/bridge-734713.rawdisk...
	I0414 13:50:40.534793 1234466 main.go:141] libmachine: (bridge-734713) DBG | Writing magic tar header
	I0414 13:50:40.534805 1234466 main.go:141] libmachine: (bridge-734713) DBG | Writing SSH key tar header
	I0414 13:50:40.534812 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:40.534739 1234612 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713 ...
	I0414 13:50:40.535006 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713
	I0414 13:50:40.535061 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines
	I0414 13:50:40.535073 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713 (perms=drwx------)
	I0414 13:50:40.535087 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:50:40.535139 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube/machines (perms=drwxr-xr-x)
	I0414 13:50:40.535174 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20384-1167927
	I0414 13:50:40.535184 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927/.minikube (perms=drwxr-xr-x)
	I0414 13:50:40.535195 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration/20384-1167927 (perms=drwxrwxr-x)
	I0414 13:50:40.535207 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 13:50:40.535236 1234466 main.go:141] libmachine: (bridge-734713) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 13:50:40.535247 1234466 main.go:141] libmachine: (bridge-734713) creating domain...
	I0414 13:50:40.535282 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 13:50:40.535298 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home/jenkins
	I0414 13:50:40.535312 1234466 main.go:141] libmachine: (bridge-734713) DBG | checking permissions on dir: /home
	I0414 13:50:40.535322 1234466 main.go:141] libmachine: (bridge-734713) DBG | skipping /home - not owner
	I0414 13:50:40.536640 1234466 main.go:141] libmachine: (bridge-734713) define libvirt domain using xml: 
	I0414 13:50:40.536667 1234466 main.go:141] libmachine: (bridge-734713) <domain type='kvm'>
	I0414 13:50:40.536679 1234466 main.go:141] libmachine: (bridge-734713)   <name>bridge-734713</name>
	I0414 13:50:40.536689 1234466 main.go:141] libmachine: (bridge-734713)   <memory unit='MiB'>3072</memory>
	I0414 13:50:40.536699 1234466 main.go:141] libmachine: (bridge-734713)   <vcpu>2</vcpu>
	I0414 13:50:40.536707 1234466 main.go:141] libmachine: (bridge-734713)   <features>
	I0414 13:50:40.536717 1234466 main.go:141] libmachine: (bridge-734713)     <acpi/>
	I0414 13:50:40.536734 1234466 main.go:141] libmachine: (bridge-734713)     <apic/>
	I0414 13:50:40.536745 1234466 main.go:141] libmachine: (bridge-734713)     <pae/>
	I0414 13:50:40.536752 1234466 main.go:141] libmachine: (bridge-734713)     
	I0414 13:50:40.536760 1234466 main.go:141] libmachine: (bridge-734713)   </features>
	I0414 13:50:40.536769 1234466 main.go:141] libmachine: (bridge-734713)   <cpu mode='host-passthrough'>
	I0414 13:50:40.536779 1234466 main.go:141] libmachine: (bridge-734713)   
	I0414 13:50:40.536788 1234466 main.go:141] libmachine: (bridge-734713)   </cpu>
	I0414 13:50:40.536799 1234466 main.go:141] libmachine: (bridge-734713)   <os>
	I0414 13:50:40.536808 1234466 main.go:141] libmachine: (bridge-734713)     <type>hvm</type>
	I0414 13:50:40.536821 1234466 main.go:141] libmachine: (bridge-734713)     <boot dev='cdrom'/>
	I0414 13:50:40.536830 1234466 main.go:141] libmachine: (bridge-734713)     <boot dev='hd'/>
	I0414 13:50:40.536842 1234466 main.go:141] libmachine: (bridge-734713)     <bootmenu enable='no'/>
	I0414 13:50:40.536847 1234466 main.go:141] libmachine: (bridge-734713)   </os>
	I0414 13:50:40.536855 1234466 main.go:141] libmachine: (bridge-734713)   <devices>
	I0414 13:50:40.536862 1234466 main.go:141] libmachine: (bridge-734713)     <disk type='file' device='cdrom'>
	I0414 13:50:40.536880 1234466 main.go:141] libmachine: (bridge-734713)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/boot2docker.iso'/>
	I0414 13:50:40.536890 1234466 main.go:141] libmachine: (bridge-734713)       <target dev='hdc' bus='scsi'/>
	I0414 13:50:40.536897 1234466 main.go:141] libmachine: (bridge-734713)       <readonly/>
	I0414 13:50:40.536906 1234466 main.go:141] libmachine: (bridge-734713)     </disk>
	I0414 13:50:40.536920 1234466 main.go:141] libmachine: (bridge-734713)     <disk type='file' device='disk'>
	I0414 13:50:40.536933 1234466 main.go:141] libmachine: (bridge-734713)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 13:50:40.536948 1234466 main.go:141] libmachine: (bridge-734713)       <source file='/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/bridge-734713.rawdisk'/>
	I0414 13:50:40.536958 1234466 main.go:141] libmachine: (bridge-734713)       <target dev='hda' bus='virtio'/>
	I0414 13:50:40.536973 1234466 main.go:141] libmachine: (bridge-734713)     </disk>
	I0414 13:50:40.536983 1234466 main.go:141] libmachine: (bridge-734713)     <interface type='network'>
	I0414 13:50:40.536990 1234466 main.go:141] libmachine: (bridge-734713)       <source network='mk-bridge-734713'/>
	I0414 13:50:40.537006 1234466 main.go:141] libmachine: (bridge-734713)       <model type='virtio'/>
	I0414 13:50:40.537018 1234466 main.go:141] libmachine: (bridge-734713)     </interface>
	I0414 13:50:40.537028 1234466 main.go:141] libmachine: (bridge-734713)     <interface type='network'>
	I0414 13:50:40.537036 1234466 main.go:141] libmachine: (bridge-734713)       <source network='default'/>
	I0414 13:50:40.537045 1234466 main.go:141] libmachine: (bridge-734713)       <model type='virtio'/>
	I0414 13:50:40.537053 1234466 main.go:141] libmachine: (bridge-734713)     </interface>
	I0414 13:50:40.537063 1234466 main.go:141] libmachine: (bridge-734713)     <serial type='pty'>
	I0414 13:50:40.537071 1234466 main.go:141] libmachine: (bridge-734713)       <target port='0'/>
	I0414 13:50:40.537082 1234466 main.go:141] libmachine: (bridge-734713)     </serial>
	I0414 13:50:40.537094 1234466 main.go:141] libmachine: (bridge-734713)     <console type='pty'>
	I0414 13:50:40.537105 1234466 main.go:141] libmachine: (bridge-734713)       <target type='serial' port='0'/>
	I0414 13:50:40.537112 1234466 main.go:141] libmachine: (bridge-734713)     </console>
	I0414 13:50:40.537121 1234466 main.go:141] libmachine: (bridge-734713)     <rng model='virtio'>
	I0414 13:50:40.537129 1234466 main.go:141] libmachine: (bridge-734713)       <backend model='random'>/dev/random</backend>
	I0414 13:50:40.537138 1234466 main.go:141] libmachine: (bridge-734713)     </rng>
	I0414 13:50:40.537146 1234466 main.go:141] libmachine: (bridge-734713)     
	I0414 13:50:40.537151 1234466 main.go:141] libmachine: (bridge-734713)     
	I0414 13:50:40.537158 1234466 main.go:141] libmachine: (bridge-734713)   </devices>
	I0414 13:50:40.537168 1234466 main.go:141] libmachine: (bridge-734713) </domain>
	I0414 13:50:40.537180 1234466 main.go:141] libmachine: (bridge-734713) 
	I0414 13:50:40.542293 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:a2:c5:c3 in network default
	I0414 13:50:40.543155 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
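	(Once the domain has been defined from the XML above, the same definitions can be inspected on the host with standard libvirt tooling, for example:
	  virsh --connect qemu:///system net-dumpxml mk-bridge-734713
	  virsh --connect qemu:///system dumpxml bridge-734713
	these commands are illustrative only and are not part of this run.)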
	I0414 13:50:40.543181 1234466 main.go:141] libmachine: (bridge-734713) starting domain...
	I0414 13:50:40.543192 1234466 main.go:141] libmachine: (bridge-734713) ensuring networks are active...
	I0414 13:50:40.544085 1234466 main.go:141] libmachine: (bridge-734713) Ensuring network default is active
	I0414 13:50:40.544503 1234466 main.go:141] libmachine: (bridge-734713) Ensuring network mk-bridge-734713 is active
	I0414 13:50:40.545224 1234466 main.go:141] libmachine: (bridge-734713) getting domain XML...
	I0414 13:50:40.546220 1234466 main.go:141] libmachine: (bridge-734713) creating domain...
	I0414 13:50:40.864611 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetIP
	I0414 13:50:40.872238 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:40.872889 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:50:40.872945 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:50:40.873296 1232896 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 13:50:40.878238 1232896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:50:40.893229 1232896 kubeadm.go:883] updating cluster {Name:flannel-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-734713
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:50:40.893411 1232896 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 13:50:40.893473 1232896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:50:40.927159 1232896 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 13:50:40.927257 1232896 ssh_runner.go:195] Run: which lz4
	I0414 13:50:40.931385 1232896 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:50:40.935992 1232896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:50:40.936041 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 13:50:42.508560 1232896 crio.go:462] duration metric: took 1.577211857s to copy over tarball
	I0414 13:50:42.508755 1232896 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:50:42.186131 1234466 main.go:141] libmachine: (bridge-734713) waiting for IP...
	I0414 13:50:42.187150 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:42.187900 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:42.188021 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:42.187885 1234612 retry.go:31] will retry after 209.280153ms: waiting for domain to come up
	I0414 13:50:42.400953 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:42.401780 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:42.401812 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:42.401758 1234612 retry.go:31] will retry after 258.587195ms: waiting for domain to come up
	I0414 13:50:42.662535 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:42.663254 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:42.663301 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:42.663223 1234612 retry.go:31] will retry after 447.059078ms: waiting for domain to come up
	I0414 13:50:43.112050 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:43.112698 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:43.112729 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:43.112651 1234612 retry.go:31] will retry after 509.754419ms: waiting for domain to come up
	I0414 13:50:43.624778 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:43.625482 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:43.625535 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:43.625448 1234612 retry.go:31] will retry after 623.011152ms: waiting for domain to come up
	I0414 13:50:44.250093 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:44.250644 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:44.250686 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:44.250534 1234612 retry.go:31] will retry after 764.557829ms: waiting for domain to come up
	I0414 13:50:45.017538 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:45.018426 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:45.018451 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:45.018313 1234612 retry.go:31] will retry after 968.96203ms: waiting for domain to come up
	I0414 13:50:45.989225 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:45.990298 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:45.990328 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:45.990229 1234612 retry.go:31] will retry after 918.990856ms: waiting for domain to come up
	I0414 13:50:48.524155 1223410 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 13:50:48.524328 1223410 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 13:50:48.525904 1223410 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 13:50:48.525995 1223410 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:50:48.526105 1223410 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:50:48.526269 1223410 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:50:48.526421 1223410 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 13:50:48.526514 1223410 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:50:48.528418 1223410 out.go:235]   - Generating certificates and keys ...
	I0414 13:50:48.528530 1223410 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:50:48.528624 1223410 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:50:48.528765 1223410 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 13:50:48.528871 1223410 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 13:50:48.528983 1223410 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 13:50:48.529064 1223410 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 13:50:48.529155 1223410 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 13:50:48.529254 1223410 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 13:50:48.529417 1223410 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 13:50:48.529560 1223410 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 13:50:48.529604 1223410 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 13:50:48.529704 1223410 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:50:48.529789 1223410 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:50:48.529839 1223410 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:50:48.529919 1223410 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:50:48.530000 1223410 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:50:48.530167 1223410 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:50:48.530286 1223410 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:50:48.530362 1223410 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:50:48.530461 1223410 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:50:45.310453 1232896 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.801647363s)
	I0414 13:50:45.310493 1232896 crio.go:469] duration metric: took 2.801895924s to extract the tarball
	I0414 13:50:45.310504 1232896 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:50:45.356652 1232896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:50:45.406626 1232896 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 13:50:45.406661 1232896 cache_images.go:84] Images are preloaded, skipping loading
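	(To recap the preload path: the first "crictl images" check at 13:50:40 found no v1.32.2 images in CRI-O, so the ~399 MB preloaded-images tarball was copied over SSH and unpacked into /var with "tar ... -I lz4"; the re-check here confirms the images are present, so no per-image pulls are needed.)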
	I0414 13:50:45.406670 1232896 kubeadm.go:934] updating node { 192.168.72.152 8443 v1.32.2 crio true true} ...
	I0414 13:50:45.406815 1232896 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-734713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
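	(The empty "ExecStart=" line followed by the full command is the usual systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet unit before setting the override, which here points the kubelet at /etc/kubernetes/kubelet.conf, the generated /var/lib/kubelet/config.yaml, and --node-ip=192.168.72.152.)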
	I0414 13:50:45.406909 1232896 ssh_runner.go:195] Run: crio config
	I0414 13:50:45.461461 1232896 cni.go:84] Creating CNI manager for "flannel"
	I0414 13:50:45.461488 1232896 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:50:45.461513 1232896 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.152 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-734713 NodeName:flannel-734713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 13:50:45.461635 1232896 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-734713"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.152"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.152"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
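	(The multi-document YAML above, InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, is the generated kubeadm config; it is what gets copied to /var/tmp/minikube/kubeadm.yaml.new (2294 bytes) a few lines below and is later consumed when the cluster is bootstrapped.)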
	
	I0414 13:50:45.461707 1232896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 13:50:45.473005 1232896 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:50:45.473087 1232896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:50:45.485045 1232896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0414 13:50:45.506983 1232896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:50:45.530421 1232896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0414 13:50:45.555515 1232896 ssh_runner.go:195] Run: grep 192.168.72.152	control-plane.minikube.internal$ /etc/hosts
	I0414 13:50:45.560505 1232896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
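	(Both /etc/hosts one-liners, this one and the host.minikube.internal one earlier, follow the same pattern: grep -v strips any stale entry for the name, the fresh "IP<TAB>name" mapping is appended, and the temp file is copied back over /etc/hosts with sudo, so control-plane.minikube.internal now resolves to 192.168.72.152 inside the guest.)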
	I0414 13:50:45.575551 1232896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:50:45.720221 1232896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:50:45.747368 1232896 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713 for IP: 192.168.72.152
	I0414 13:50:45.747403 1232896 certs.go:194] generating shared ca certs ...
	I0414 13:50:45.747430 1232896 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:45.747707 1232896 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:50:45.747811 1232896 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:50:45.747835 1232896 certs.go:256] generating profile certs ...
	I0414 13:50:45.747918 1232896 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.key
	I0414 13:50:45.747937 1232896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt with IP's: []
	I0414 13:50:45.922380 1232896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt ...
	I0414 13:50:45.922422 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.crt: {Name:mk1de736571f3f8c7d352cc6b2b670d2f7a3f166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:45.922639 1232896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.key ...
	I0414 13:50:45.922655 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/client.key: {Name:mk2f6eadeffd7c817852e1cf122fbd49307e71e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:45.922754 1232896 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f
	I0414 13:50:45.922778 1232896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.152]
	I0414 13:50:46.110029 1232896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f ...
	I0414 13:50:46.110069 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f: {Name:mk87d3d3dae2adc62fb3b924b2cc7bd153bf0895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.110292 1232896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f ...
	I0414 13:50:46.110311 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f: {Name:mka9b50ff1a0a846f6aad9c4e3e0e6a306ad6a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.110419 1232896 certs.go:381] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt.b8b4f66f -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt
	I0414 13:50:46.110518 1232896 certs.go:385] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key.b8b4f66f -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key
	I0414 13:50:46.110597 1232896 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key
	I0414 13:50:46.110628 1232896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt with IP's: []
	I0414 13:50:46.456186 1232896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt ...
	I0414 13:50:46.456227 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt: {Name:mkfb4bf081c9c81523a9ab1a930bbd9a48e04eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.456431 1232896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key ...
	I0414 13:50:46.456457 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key: {Name:mk555b1a424c2e8f038bac59e6d58cf02d051438 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:50:46.456681 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:50:46.456722 1232896 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:50:46.456734 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:50:46.456765 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:50:46.456797 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:50:46.456827 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:50:46.456889 1232896 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:50:46.457527 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:50:46.487196 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:50:46.515169 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:50:46.539007 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:50:46.565526 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 13:50:46.592839 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:50:46.620546 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:50:46.650296 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/flannel-734713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 13:50:46.677899 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:50:46.706202 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:50:46.734768 1232896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
	I0414 13:50:46.760248 1232896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:50:46.779528 1232896 ssh_runner.go:195] Run: openssl version
	I0414 13:50:46.787229 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:50:46.799578 1232896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:50:46.805310 1232896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:50:46.805402 1232896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:50:46.812411 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 13:50:46.825822 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:50:46.838323 1232896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:50:46.844286 1232896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:50:46.844403 1232896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:50:46.851198 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
	I0414 13:50:46.864017 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:50:46.875865 1232896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:50:46.881178 1232896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:50:46.881257 1232896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:50:46.890423 1232896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
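The three ln -fs commands above create OpenSSL hash-named trust links (b5213941.0, 51391683.0, 3ec20f2e.0): the link name is the certificate's subject hash plus a ".0" suffix, computed with the same openssl invocation the log runs. A minimal Go sketch of that derivation, reusing the minikubeCA.pem path shown above (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Path taken from the log above; any PEM-encoded certificate works.
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// Same command the log runs: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))

	// The trust link then created with ln -fs is named <subject-hash>.0,
	// e.g. b5213941.0 for minikubeCA.pem in the log above.
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}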
	I0414 13:50:46.909837 1232896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:50:46.915589 1232896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 13:50:46.915675 1232896 kubeadm.go:392] StartCluster: {Name:flannel-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:50:46.915774 1232896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:50:46.915834 1232896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:50:46.954926 1232896 cri.go:89] found id: ""
	I0414 13:50:46.955033 1232896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:50:46.966456 1232896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:50:46.977813 1232896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:50:46.989696 1232896 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:50:46.989730 1232896 kubeadm.go:157] found existing configuration files:
	
	I0414 13:50:46.989792 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:50:47.000961 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:50:47.001039 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:50:47.012218 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:50:47.023049 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:50:47.023138 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:50:47.033513 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:50:47.045033 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:50:47.045111 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:50:47.055536 1232896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:50:47.065805 1232896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:50:47.065884 1232896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 13:50:47.078066 1232896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:50:47.261431 1232896 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 13:50:48.532385 1223410 out.go:235]   - Booting up control plane ...
	I0414 13:50:48.532556 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:50:48.532689 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:50:48.532768 1223410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:50:48.532843 1223410 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:50:48.533084 1223410 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 13:50:48.533159 1223410 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 13:50:48.533265 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.533525 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.533594 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.533814 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.533912 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534108 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534172 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534394 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534516 1223410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 13:50:48.534801 1223410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 13:50:48.534821 1223410 kubeadm.go:310] 
	I0414 13:50:48.534885 1223410 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 13:50:48.534955 1223410 kubeadm.go:310] 		timed out waiting for the condition
	I0414 13:50:48.534967 1223410 kubeadm.go:310] 
	I0414 13:50:48.535000 1223410 kubeadm.go:310] 	This error is likely caused by:
	I0414 13:50:48.535047 1223410 kubeadm.go:310] 		- The kubelet is not running
	I0414 13:50:48.535180 1223410 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 13:50:48.535194 1223410 kubeadm.go:310] 
	I0414 13:50:48.535371 1223410 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 13:50:48.535439 1223410 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 13:50:48.535500 1223410 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 13:50:48.535585 1223410 kubeadm.go:310] 
	I0414 13:50:48.535769 1223410 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 13:50:48.535905 1223410 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 13:50:48.535922 1223410 kubeadm.go:310] 
	I0414 13:50:48.536089 1223410 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 13:50:48.536225 1223410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 13:50:48.536329 1223410 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 13:50:48.536413 1223410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 13:50:48.536508 1223410 kubeadm.go:310] 
	I0414 13:50:48.536517 1223410 kubeadm.go:394] duration metric: took 8m1.284425887s to StartCluster
	I0414 13:50:48.536575 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 13:50:48.536648 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 13:50:48.585550 1223410 cri.go:89] found id: ""
	I0414 13:50:48.585590 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.585601 1223410 logs.go:284] No container was found matching "kube-apiserver"
	I0414 13:50:48.585609 1223410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 13:50:48.585672 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 13:50:48.626898 1223410 cri.go:89] found id: ""
	I0414 13:50:48.626928 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.626940 1223410 logs.go:284] No container was found matching "etcd"
	I0414 13:50:48.626948 1223410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 13:50:48.627009 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 13:50:48.670274 1223410 cri.go:89] found id: ""
	I0414 13:50:48.670317 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.670330 1223410 logs.go:284] No container was found matching "coredns"
	I0414 13:50:48.670338 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 13:50:48.670411 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 13:50:48.720563 1223410 cri.go:89] found id: ""
	I0414 13:50:48.720600 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.720611 1223410 logs.go:284] No container was found matching "kube-scheduler"
	I0414 13:50:48.720619 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 13:50:48.720686 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 13:50:48.767764 1223410 cri.go:89] found id: ""
	I0414 13:50:48.767799 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.767807 1223410 logs.go:284] No container was found matching "kube-proxy"
	I0414 13:50:48.767814 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 13:50:48.767866 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 13:50:48.818486 1223410 cri.go:89] found id: ""
	I0414 13:50:48.818531 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.818544 1223410 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 13:50:48.818553 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 13:50:48.818619 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 13:50:48.867564 1223410 cri.go:89] found id: ""
	I0414 13:50:48.867644 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.867692 1223410 logs.go:284] No container was found matching "kindnet"
	I0414 13:50:48.867706 1223410 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 13:50:48.867774 1223410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 13:50:48.906916 1223410 cri.go:89] found id: ""
	I0414 13:50:48.906950 1223410 logs.go:282] 0 containers: []
	W0414 13:50:48.906958 1223410 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 13:50:48.906971 1223410 logs.go:123] Gathering logs for container status ...
	I0414 13:50:48.906988 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 13:50:48.955626 1223410 logs.go:123] Gathering logs for kubelet ...
	I0414 13:50:48.955683 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 13:50:49.022469 1223410 logs.go:123] Gathering logs for dmesg ...
	I0414 13:50:49.022525 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 13:50:49.041402 1223410 logs.go:123] Gathering logs for describe nodes ...
	I0414 13:50:49.041449 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 13:50:49.131342 1223410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 13:50:49.131373 1223410 logs.go:123] Gathering logs for CRI-O ...
	I0414 13:50:49.131392 1223410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0414 13:50:49.248634 1223410 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 13:50:49.248726 1223410 out.go:270] * 
	W0414 13:50:49.248809 1223410 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:50:49.248828 1223410 out.go:270] * 
	W0414 13:50:49.249735 1223410 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 13:50:49.253971 1223410 out.go:201] 
	W0414 13:50:49.255696 1223410 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 13:50:49.255776 1223410 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 13:50:49.255807 1223410 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 13:50:49.257975 1223410 out.go:201] 
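The v1.20.0 kubeadm init failure above comes down to the kubelet-check: kubeadm polls the kubelet's healthz endpoint on 127.0.0.1:10248 and keeps getting connection refused, so the control plane never comes up. A minimal Go sketch of that probe loop, mirroring the curl call quoted in the log (illustrative only, not kubeadm's implementation):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(40 * time.Second) // mirrors the "Initial timeout of 40s" above

	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// A kubelet that never came up yields exactly the
			// "connection refused" errors repeated in the log.
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
		return
	}
	fmt.Println("timed out; see 'systemctl status kubelet' and 'journalctl -xeu kubelet'")
}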
	I0414 13:50:46.910593 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:46.911338 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:46.911363 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:46.911230 1234612 retry.go:31] will retry after 1.155366589s: waiting for domain to come up
	I0414 13:50:48.068077 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:48.068834 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:48.068863 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:48.068780 1234612 retry.go:31] will retry after 1.700089826s: waiting for domain to come up
	I0414 13:50:49.770330 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:49.771048 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:49.771117 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:49.771036 1234612 retry.go:31] will retry after 2.036657651s: waiting for domain to come up
	I0414 13:50:51.808884 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:51.809332 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:51.809415 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:51.809319 1234612 retry.go:31] will retry after 3.172888858s: waiting for domain to come up
	I0414 13:50:54.984140 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:54.984848 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:54.984880 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:54.984820 1234612 retry.go:31] will retry after 4.057631495s: waiting for domain to come up
	I0414 13:50:58.428436 1232896 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 13:50:58.428548 1232896 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:50:58.428668 1232896 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:50:58.428807 1232896 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:50:58.428971 1232896 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 13:50:58.429065 1232896 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:50:58.431307 1232896 out.go:235]   - Generating certificates and keys ...
	I0414 13:50:58.431422 1232896 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:50:58.431510 1232896 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:50:58.431611 1232896 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 13:50:58.431695 1232896 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 13:50:58.431780 1232896 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 13:50:58.431855 1232896 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 13:50:58.431934 1232896 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 13:50:58.432114 1232896 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-734713 localhost] and IPs [192.168.72.152 127.0.0.1 ::1]
	I0414 13:50:58.432187 1232896 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 13:50:58.432364 1232896 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-734713 localhost] and IPs [192.168.72.152 127.0.0.1 ::1]
	I0414 13:50:58.432465 1232896 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 13:50:58.432542 1232896 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 13:50:58.432605 1232896 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 13:50:58.432690 1232896 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:50:58.432825 1232896 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:50:58.432927 1232896 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 13:50:58.433007 1232896 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:50:58.433096 1232896 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:50:58.433167 1232896 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:50:58.433305 1232896 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:50:58.433402 1232896 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:50:58.435843 1232896 out.go:235]   - Booting up control plane ...
	I0414 13:50:58.435973 1232896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:50:58.436178 1232896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:50:58.436292 1232896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:50:58.436424 1232896 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:50:58.436547 1232896 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:50:58.436611 1232896 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:50:58.436782 1232896 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 13:50:58.436910 1232896 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 13:50:58.437037 1232896 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.700673ms
	I0414 13:50:58.437164 1232896 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 13:50:58.437273 1232896 kubeadm.go:310] [api-check] The API server is healthy after 5.503941955s
	I0414 13:50:58.437430 1232896 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 13:50:58.437619 1232896 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 13:50:58.437710 1232896 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 13:50:58.437991 1232896 kubeadm.go:310] [mark-control-plane] Marking the node flannel-734713 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 13:50:58.438083 1232896 kubeadm.go:310] [bootstrap-token] Using token: 6zgao3.i3fwzwxvba12lxq2
	I0414 13:50:58.440456 1232896 out.go:235]   - Configuring RBAC rules ...
	I0414 13:50:58.440634 1232896 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 13:50:58.440821 1232896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 13:50:58.440956 1232896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 13:50:58.441103 1232896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 13:50:58.441240 1232896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 13:50:58.441334 1232896 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 13:50:58.441474 1232896 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 13:50:58.441518 1232896 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 13:50:58.441604 1232896 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 13:50:58.441692 1232896 kubeadm.go:310] 
	I0414 13:50:58.441832 1232896 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 13:50:58.441843 1232896 kubeadm.go:310] 
	I0414 13:50:58.441954 1232896 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 13:50:58.441965 1232896 kubeadm.go:310] 
	I0414 13:50:58.442011 1232896 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 13:50:58.442121 1232896 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 13:50:58.442202 1232896 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 13:50:58.442214 1232896 kubeadm.go:310] 
	I0414 13:50:58.442300 1232896 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 13:50:58.442311 1232896 kubeadm.go:310] 
	I0414 13:50:58.442366 1232896 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 13:50:58.442378 1232896 kubeadm.go:310] 
	I0414 13:50:58.442448 1232896 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 13:50:58.442549 1232896 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 13:50:58.442644 1232896 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 13:50:58.442652 1232896 kubeadm.go:310] 
	I0414 13:50:58.442759 1232896 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 13:50:58.442858 1232896 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 13:50:58.442866 1232896 kubeadm.go:310] 
	I0414 13:50:58.442978 1232896 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6zgao3.i3fwzwxvba12lxq2 \
	I0414 13:50:58.443136 1232896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5f2612fbe124e36b509339bfdb73251340c2c4faa099e85bd5300a27bddf90e9 \
	I0414 13:50:58.443184 1232896 kubeadm.go:310] 	--control-plane 
	I0414 13:50:58.443200 1232896 kubeadm.go:310] 
	I0414 13:50:58.443310 1232896 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 13:50:58.443319 1232896 kubeadm.go:310] 
	I0414 13:50:58.443447 1232896 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6zgao3.i3fwzwxvba12lxq2 \
	I0414 13:50:58.443624 1232896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5f2612fbe124e36b509339bfdb73251340c2c4faa099e85bd5300a27bddf90e9 
	I0414 13:50:58.443646 1232896 cni.go:84] Creating CNI manager for "flannel"
	I0414 13:50:58.446085 1232896 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0414 13:50:58.447927 1232896 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 13:50:58.453424 1232896 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 13:50:58.453454 1232896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0414 13:50:58.481635 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 13:50:59.015577 1232896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 13:50:59.015694 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:50:59.015723 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-734713 minikube.k8s.io/updated_at=2025_04_14T13_50_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=9d10b41d083ee2064b1b8c7e16503e13b1847696 minikube.k8s.io/name=flannel-734713 minikube.k8s.io/primary=true
	I0414 13:50:59.057626 1232896 ops.go:34] apiserver oom_adj: -16
	I0414 13:50:59.045217 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:50:59.045923 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find current IP address of domain bridge-734713 in network mk-bridge-734713
	I0414 13:50:59.045959 1234466 main.go:141] libmachine: (bridge-734713) DBG | I0414 13:50:59.045874 1234612 retry.go:31] will retry after 4.020907731s: waiting for domain to come up
	I0414 13:50:59.230240 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:50:59.731059 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:00.231185 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:00.730623 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:01.230917 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:01.730415 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:02.230902 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:02.731378 1232896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:02.821756 1232896 kubeadm.go:1113] duration metric: took 3.806160411s to wait for elevateKubeSystemPrivileges
	I0414 13:51:02.821794 1232896 kubeadm.go:394] duration metric: took 15.906125614s to StartCluster
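The burst of "kubectl get sa default" calls above is a simple poll: the command is re-run roughly every half second until the default service account exists, which is what the elevateKubeSystemPrivileges duration metric measures. A minimal Go sketch of such a wait loop, reusing the binary path and kubeconfig from the log (the 2-minute cap is an assumption for the sketch; this is not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Binary path and kubeconfig exactly as shown in the log above.
	kubectl := "/var/lib/minikube/binaries/v1.32.2/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	start := time.Now()
	for {
		// Succeeds only once the "default" service account exists in the cluster.
		err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run()
		if err == nil {
			fmt.Printf("default service account ready after %s\n", time.Since(start))
			return
		}
		if time.Since(start) > 2*time.Minute {
			fmt.Println("gave up waiting for the default service account:", err)
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing of the retries above
	}
}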
	I0414 13:51:02.821815 1232896 settings.go:142] acquiring lock: {Name:mkc68e13b098b3e7461fc88804a0aed191118bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:02.821906 1232896 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:51:02.822887 1232896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/kubeconfig: {Name:mk5eb6c4765d4c70f1db00acbce88c0952cb579b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:02.823206 1232896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 13:51:02.823219 1232896 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 13:51:02.823292 1232896 addons.go:69] Setting storage-provisioner=true in profile "flannel-734713"
	I0414 13:51:02.823196 1232896 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:51:02.823331 1232896 addons.go:238] Setting addon storage-provisioner=true in "flannel-734713"
	I0414 13:51:02.823338 1232896 addons.go:69] Setting default-storageclass=true in profile "flannel-734713"
	I0414 13:51:02.823360 1232896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-734713"
	I0414 13:51:02.823365 1232896 host.go:66] Checking if "flannel-734713" exists ...
	I0414 13:51:02.823434 1232896 config.go:182] Loaded profile config "flannel-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:51:02.823847 1232896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:02.823889 1232896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:02.823921 1232896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:02.823893 1232896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:02.826819 1232896 out.go:177] * Verifying Kubernetes components...
	I0414 13:51:02.828426 1232896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:51:02.843993 1232896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0414 13:51:02.844013 1232896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0414 13:51:02.844630 1232896 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:02.844692 1232896 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:02.845173 1232896 main.go:141] libmachine: Using API Version  1
	I0414 13:51:02.845195 1232896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:02.845358 1232896 main.go:141] libmachine: Using API Version  1
	I0414 13:51:02.845384 1232896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:02.845619 1232896 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:02.845825 1232896 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:02.846069 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetState
	I0414 13:51:02.846195 1232896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:02.846234 1232896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:02.850656 1232896 addons.go:238] Setting addon default-storageclass=true in "flannel-734713"
	I0414 13:51:02.850733 1232896 host.go:66] Checking if "flannel-734713" exists ...
	I0414 13:51:02.851157 1232896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:02.851218 1232896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:02.865404 1232896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0414 13:51:02.865992 1232896 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:02.866625 1232896 main.go:141] libmachine: Using API Version  1
	I0414 13:51:02.866660 1232896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:02.867208 1232896 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:02.867449 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetState
	I0414 13:51:02.870195 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:51:02.870821 1232896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0414 13:51:02.871400 1232896 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:02.871938 1232896 main.go:141] libmachine: Using API Version  1
	I0414 13:51:02.871970 1232896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:02.872415 1232896 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:02.872930 1232896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:02.872994 1232896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:02.873287 1232896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:51:02.875817 1232896 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:51:02.875844 1232896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 13:51:02.875872 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:51:02.880791 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:51:02.881517 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:51:02.881554 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:51:02.881753 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:51:02.881986 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:51:02.882194 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:51:02.882378 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:51:02.890324 1232896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
	I0414 13:51:02.890846 1232896 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:02.891374 1232896 main.go:141] libmachine: Using API Version  1
	I0414 13:51:02.891395 1232896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:02.891861 1232896 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:02.892046 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetState
	I0414 13:51:02.894013 1232896 main.go:141] libmachine: (flannel-734713) Calling .DriverName
	I0414 13:51:02.894281 1232896 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 13:51:02.894305 1232896 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 13:51:02.894327 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHHostname
	I0414 13:51:02.897914 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:51:02.898527 1232896 main.go:141] libmachine: (flannel-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:9a:48", ip: ""} in network mk-flannel-734713: {Iface:virbr4 ExpiryTime:2025-04-14 14:50:30 +0000 UTC Type:0 Mac:52:54:00:9e:9a:48 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:flannel-734713 Clientid:01:52:54:00:9e:9a:48}
	I0414 13:51:02.898572 1232896 main.go:141] libmachine: (flannel-734713) DBG | domain flannel-734713 has defined IP address 192.168.72.152 and MAC address 52:54:00:9e:9a:48 in network mk-flannel-734713
	I0414 13:51:02.898778 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHPort
	I0414 13:51:02.899040 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHKeyPath
	I0414 13:51:02.899308 1232896 main.go:141] libmachine: (flannel-734713) Calling .GetSSHUsername
	I0414 13:51:02.899484 1232896 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/flannel-734713/id_rsa Username:docker}
	I0414 13:51:03.017123 1232896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 13:51:03.075403 1232896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:51:03.249352 1232896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:51:03.256733 1232896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 13:51:03.844367 1232896 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0414 13:51:03.845459 1232896 node_ready.go:35] waiting up to 15m0s for node "flannel-734713" to be "Ready" ...
	I0414 13:51:04.249421 1232896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00000851s)
	I0414 13:51:04.249487 1232896 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:04.249494 1232896 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:04.249521 1232896 main.go:141] libmachine: (flannel-734713) Calling .Close
	I0414 13:51:04.249504 1232896 main.go:141] libmachine: (flannel-734713) Calling .Close
	I0414 13:51:04.249875 1232896 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:04.249892 1232896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:04.249901 1232896 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:04.249908 1232896 main.go:141] libmachine: (flannel-734713) Calling .Close
	I0414 13:51:04.249920 1232896 main.go:141] libmachine: (flannel-734713) DBG | Closing plugin on server side
	I0414 13:51:04.249960 1232896 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:04.249967 1232896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:04.249974 1232896 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:04.249980 1232896 main.go:141] libmachine: (flannel-734713) Calling .Close
	I0414 13:51:04.250362 1232896 main.go:141] libmachine: (flannel-734713) DBG | Closing plugin on server side
	I0414 13:51:04.250403 1232896 main.go:141] libmachine: (flannel-734713) DBG | Closing plugin on server side
	I0414 13:51:04.250443 1232896 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:04.250451 1232896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:04.250528 1232896 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:04.250544 1232896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:04.283799 1232896 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:04.283841 1232896 main.go:141] libmachine: (flannel-734713) Calling .Close
	I0414 13:51:04.284210 1232896 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:04.284235 1232896 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:04.286451 1232896 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 13:51:03.069151 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.069848 1234466 main.go:141] libmachine: (bridge-734713) found domain IP: 192.168.50.72
	I0414 13:51:03.069949 1234466 main.go:141] libmachine: (bridge-734713) reserving static IP address...
	I0414 13:51:03.069973 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has current primary IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.070380 1234466 main.go:141] libmachine: (bridge-734713) DBG | unable to find host DHCP lease matching {name: "bridge-734713", mac: "52:54:00:35:90:d7", ip: "192.168.50.72"} in network mk-bridge-734713
	I0414 13:51:03.192263 1234466 main.go:141] libmachine: (bridge-734713) DBG | Getting to WaitForSSH function...
	I0414 13:51:03.192303 1234466 main.go:141] libmachine: (bridge-734713) reserved static IP address 192.168.50.72 for domain bridge-734713
	I0414 13:51:03.192318 1234466 main.go:141] libmachine: (bridge-734713) waiting for SSH...
	I0414 13:51:03.196253 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.197270 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.197346 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.197591 1234466 main.go:141] libmachine: (bridge-734713) DBG | Using SSH client type: external
	I0414 13:51:03.197621 1234466 main.go:141] libmachine: (bridge-734713) DBG | Using SSH private key: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa (-rw-------)
	I0414 13:51:03.197741 1234466 main.go:141] libmachine: (bridge-734713) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 13:51:03.197769 1234466 main.go:141] libmachine: (bridge-734713) DBG | About to run SSH command:
	I0414 13:51:03.197783 1234466 main.go:141] libmachine: (bridge-734713) DBG | exit 0
	I0414 13:51:03.329416 1234466 main.go:141] libmachine: (bridge-734713) DBG | SSH cmd err, output: <nil>: 
	I0414 13:51:03.329855 1234466 main.go:141] libmachine: (bridge-734713) KVM machine creation complete
	I0414 13:51:03.330318 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetConfigRaw
	I0414 13:51:03.331187 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:03.331540 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:03.332028 1234466 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 13:51:03.332056 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetState
	I0414 13:51:03.334536 1234466 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 13:51:03.334563 1234466 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 13:51:03.334570 1234466 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 13:51:03.334579 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:03.339440 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.339955 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.340015 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.340230 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:03.340549 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.340838 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.341016 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:03.341345 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:03.341659 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:03.341675 1234466 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 13:51:03.459720 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:51:03.459749 1234466 main.go:141] libmachine: Detecting the provisioner...
	I0414 13:51:03.459757 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:03.463722 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.466185 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:03.464540 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.466262 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.467868 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.468269 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.468605 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:03.468920 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:03.469331 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:03.469364 1234466 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 13:51:03.588935 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 13:51:03.589031 1234466 main.go:141] libmachine: found compatible host: buildroot
	I0414 13:51:03.589038 1234466 main.go:141] libmachine: Provisioning with buildroot...
	I0414 13:51:03.589047 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetMachineName
	I0414 13:51:03.589391 1234466 buildroot.go:166] provisioning hostname "bridge-734713"
	I0414 13:51:03.589424 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetMachineName
	I0414 13:51:03.589671 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:03.592944 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.593484 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.593514 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.593785 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:03.594064 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.594233 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.594405 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:03.594611 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:03.594897 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:03.594923 1234466 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-734713 && echo "bridge-734713" | sudo tee /etc/hostname
	I0414 13:51:03.732323 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-734713
	
	I0414 13:51:03.732364 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:03.735571 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.736034 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.736078 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.736344 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:03.736594 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.736852 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:03.737040 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:03.737286 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:03.737645 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:03.737670 1234466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-734713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-734713/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-734713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 13:51:03.861255 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 13:51:03.861290 1234466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20384-1167927/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-1167927/.minikube}
	I0414 13:51:03.861322 1234466 buildroot.go:174] setting up certificates
	I0414 13:51:03.861338 1234466 provision.go:84] configureAuth start
	I0414 13:51:03.861355 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetMachineName
	I0414 13:51:03.861685 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetIP
	I0414 13:51:03.865590 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.866025 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.866062 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.866348 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:03.869442 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.869898 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:03.869926 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:03.870121 1234466 provision.go:143] copyHostCerts
	I0414 13:51:03.870185 1234466 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem, removing ...
	I0414 13:51:03.870212 1234466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem
	I0414 13:51:03.870278 1234466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.pem (1078 bytes)
	I0414 13:51:03.870420 1234466 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem, removing ...
	I0414 13:51:03.870433 1234466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem
	I0414 13:51:03.870464 1234466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/cert.pem (1123 bytes)
	I0414 13:51:03.870536 1234466 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem, removing ...
	I0414 13:51:03.870547 1234466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem
	I0414 13:51:03.870573 1234466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-1167927/.minikube/key.pem (1675 bytes)
	I0414 13:51:03.870642 1234466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem org=jenkins.bridge-734713 san=[127.0.0.1 192.168.50.72 bridge-734713 localhost minikube]
	I0414 13:51:04.001290 1234466 provision.go:177] copyRemoteCerts
	I0414 13:51:04.001357 1234466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 13:51:04.001402 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.006612 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.007201 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.007264 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.007527 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.007948 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.008399 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.008658 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:04.110676 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 13:51:04.146328 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 13:51:04.183404 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0414 13:51:04.218879 1234466 provision.go:87] duration metric: took 357.5208ms to configureAuth
	I0414 13:51:04.218928 1234466 buildroot.go:189] setting minikube options for container-runtime
	I0414 13:51:04.219185 1234466 config.go:182] Loaded profile config "bridge-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:51:04.219366 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.222985 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.223614 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.223759 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.223897 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.224211 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.224507 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.224716 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.224950 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:04.225304 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:04.225326 1234466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 13:51:04.471711 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 13:51:04.471745 1234466 main.go:141] libmachine: Checking connection to Docker...
	I0414 13:51:04.471752 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetURL
	I0414 13:51:04.473082 1234466 main.go:141] libmachine: (bridge-734713) DBG | using libvirt version 6000000
	I0414 13:51:04.475648 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.476214 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.476311 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.476447 1234466 main.go:141] libmachine: Docker is up and running!
	I0414 13:51:04.476469 1234466 main.go:141] libmachine: Reticulating splines...
	I0414 13:51:04.476479 1234466 client.go:171] duration metric: took 25.134294563s to LocalClient.Create
	I0414 13:51:04.476516 1234466 start.go:167] duration metric: took 25.134391195s to libmachine.API.Create "bridge-734713"
	I0414 13:51:04.476531 1234466 start.go:293] postStartSetup for "bridge-734713" (driver="kvm2")
	I0414 13:51:04.476547 1234466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 13:51:04.476577 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:04.476946 1234466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 13:51:04.477019 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.481423 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.481812 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.481848 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.482092 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.482350 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.482602 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.482761 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:04.575131 1234466 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 13:51:04.581996 1234466 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 13:51:04.582039 1234466 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/addons for local assets ...
	I0414 13:51:04.582111 1234466 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-1167927/.minikube/files for local assets ...
	I0414 13:51:04.582198 1234466 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem -> 11757462.pem in /etc/ssl/certs
	I0414 13:51:04.582349 1234466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 13:51:04.597737 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:51:04.630897 1234466 start.go:296] duration metric: took 154.348306ms for postStartSetup
	I0414 13:51:04.630977 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetConfigRaw
	I0414 13:51:04.631758 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetIP
	I0414 13:51:04.635561 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.636203 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.636239 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.636657 1234466 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/config.json ...
	I0414 13:51:04.636995 1234466 start.go:128] duration metric: took 25.323511417s to createHost
	I0414 13:51:04.637034 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.641757 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.642490 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.642541 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.643058 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.643476 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.643753 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.643948 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.644212 1234466 main.go:141] libmachine: Using SSH client type: native
	I0414 13:51:04.644436 1234466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0414 13:51:04.644450 1234466 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 13:51:04.756989 1234466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744638664.694741335
	
	I0414 13:51:04.757017 1234466 fix.go:216] guest clock: 1744638664.694741335
	I0414 13:51:04.757025 1234466 fix.go:229] Guest: 2025-04-14 13:51:04.694741335 +0000 UTC Remote: 2025-04-14 13:51:04.637015139 +0000 UTC m=+33.216809105 (delta=57.726196ms)
	I0414 13:51:04.757056 1234466 fix.go:200] guest clock delta is within tolerance: 57.726196ms
	I0414 13:51:04.757063 1234466 start.go:83] releasing machines lock for "bridge-734713", held for 25.443823589s
	I0414 13:51:04.757088 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:04.757537 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetIP
	I0414 13:51:04.760882 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.761347 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.761397 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.761589 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:04.762531 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:04.762812 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:04.762952 1234466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 13:51:04.763022 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.763209 1234466 ssh_runner.go:195] Run: cat /version.json
	I0414 13:51:04.763244 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:04.767014 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.767444 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.767702 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.767746 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.767912 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.768091 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:04.768141 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:04.768206 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.768336 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:04.768453 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.768561 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:04.768612 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:04.768741 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:04.768921 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:04.848806 1234466 ssh_runner.go:195] Run: systemctl --version
	I0414 13:51:04.875088 1234466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 13:51:05.047562 1234466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 13:51:05.054732 1234466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 13:51:05.054818 1234466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 13:51:05.074078 1234466 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 13:51:05.074135 1234466 start.go:495] detecting cgroup driver to use...
	I0414 13:51:05.074213 1234466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 13:51:05.094791 1234466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 13:51:05.111331 1234466 docker.go:217] disabling cri-docker service (if available) ...
	I0414 13:51:05.111394 1234466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 13:51:05.127960 1234466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 13:51:05.146340 1234466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 13:51:05.273211 1234466 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 13:51:05.466394 1234466 docker.go:233] disabling docker service ...
	I0414 13:51:05.466486 1234466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 13:51:05.484275 1234466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 13:51:05.501574 1234466 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 13:51:05.634554 1234466 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 13:51:05.775588 1234466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 13:51:05.793697 1234466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 13:51:05.818000 1234466 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 13:51:05.818084 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.831265 1234466 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 13:51:05.831405 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.843451 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.857199 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.868766 1234466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 13:51:05.881637 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.895782 1234466 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.919082 1234466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 13:51:05.931968 1234466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 13:51:05.943972 1234466 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 13:51:05.944074 1234466 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 13:51:05.961516 1234466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 13:51:05.974657 1234466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:51:06.120359 1234466 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 13:51:06.236997 1234466 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 13:51:06.237097 1234466 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 13:51:06.242869 1234466 start.go:563] Will wait 60s for crictl version
	I0414 13:51:06.242997 1234466 ssh_runner.go:195] Run: which crictl
	I0414 13:51:06.248106 1234466 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 13:51:06.289383 1234466 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 13:51:06.289474 1234466 ssh_runner.go:195] Run: crio --version
	I0414 13:51:06.323282 1234466 ssh_runner.go:195] Run: crio --version
	I0414 13:51:06.361881 1234466 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 13:51:06.363811 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetIP
	I0414 13:51:06.367171 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:06.367826 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:06.367890 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:06.368205 1234466 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 13:51:06.373526 1234466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:51:06.389523 1234466 kubeadm.go:883] updating cluster {Name:bridge-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 13:51:06.389650 1234466 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 13:51:06.389719 1234466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:51:06.429517 1234466 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 13:51:06.429611 1234466 ssh_runner.go:195] Run: which lz4
	I0414 13:51:06.434072 1234466 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 13:51:06.438894 1234466 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 13:51:06.438941 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 13:51:04.288053 1232896 addons.go:514] duration metric: took 1.464821279s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 13:51:04.349878 1232896 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-734713" context rescaled to 1 replicas
	I0414 13:51:05.848973 1232896 node_ready.go:53] node "flannel-734713" has status "Ready":"False"
	I0414 13:51:07.850481 1232896 node_ready.go:53] node "flannel-734713" has status "Ready":"False"
	I0414 13:51:08.005562 1234466 crio.go:462] duration metric: took 1.571586403s to copy over tarball
	I0414 13:51:08.005652 1234466 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 13:51:10.824904 1234466 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.819216537s)
	I0414 13:51:10.824951 1234466 crio.go:469] duration metric: took 2.819355986s to extract the tarball
	I0414 13:51:10.824963 1234466 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 13:51:10.869465 1234466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 13:51:10.915475 1234466 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 13:51:10.915505 1234466 cache_images.go:84] Images are preloaded, skipping loading
	I0414 13:51:10.915514 1234466 kubeadm.go:934] updating node { 192.168.50.72 8443 v1.32.2 crio true true} ...
	I0414 13:51:10.915648 1234466 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-734713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0414 13:51:10.915776 1234466 ssh_runner.go:195] Run: crio config
	I0414 13:51:10.965367 1234466 cni.go:84] Creating CNI manager for "bridge"
	I0414 13:51:10.965397 1234466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 13:51:10.965419 1234466 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.72 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-734713 NodeName:bridge-734713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 13:51:10.965567 1234466 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-734713"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.72"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.72"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 13:51:10.965653 1234466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 13:51:10.976611 1234466 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 13:51:10.976704 1234466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 13:51:10.988488 1234466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0414 13:51:11.008049 1234466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 13:51:11.026412 1234466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0414 13:51:11.046765 1234466 ssh_runner.go:195] Run: grep 192.168.50.72	control-plane.minikube.internal$ /etc/hosts
	I0414 13:51:11.052685 1234466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 13:51:11.068821 1234466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:51:11.199728 1234466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:51:11.222437 1234466 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713 for IP: 192.168.50.72
	I0414 13:51:11.222461 1234466 certs.go:194] generating shared ca certs ...
	I0414 13:51:11.222480 1234466 certs.go:226] acquiring lock for ca certs: {Name:mkf843fc5ec8174e0b98797f754d5d9eb6327b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:11.222636 1234466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key
	I0414 13:51:11.222673 1234466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key
	I0414 13:51:11.222680 1234466 certs.go:256] generating profile certs ...
	I0414 13:51:11.222732 1234466 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.key
	I0414 13:51:11.222747 1234466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt with IP's: []
	I0414 13:51:11.365988 1234466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt ...
	I0414 13:51:11.366026 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.crt: {Name:mk1c3cc5be5c7be288ffe1c32f0a1821e7236131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:11.450947 1234466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.key ...
	I0414 13:51:11.451016 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/client.key: {Name:mk689ac422a768ba2f3657cd71c037393bb8d2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:11.451202 1234466 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key.61a2bbaa
	I0414 13:51:11.451242 1234466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt.61a2bbaa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.72]
	I0414 13:51:11.497666 1234466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt.61a2bbaa ...
	I0414 13:51:11.497708 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt.61a2bbaa: {Name:mk4bdd975f2523e3521ea8be6415827ba4579231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:11.612862 1234466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key.61a2bbaa ...
	I0414 13:51:11.612902 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key.61a2bbaa: {Name:mkef710d1e4f54f4806f158371103d3edd21f34c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:11.613047 1234466 certs.go:381] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt.61a2bbaa -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt
	I0414 13:51:11.613163 1234466 certs.go:385] copying /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key.61a2bbaa -> /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key
	I0414 13:51:11.613305 1234466 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.key
	I0414 13:51:11.613336 1234466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.crt with IP's: []
	I0414 13:51:12.411383 1234466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.crt ...
	I0414 13:51:12.411424 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.crt: {Name:mk79b09a5bdb6725dd71016772cd31da197161cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:12.411622 1234466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.key ...
	I0414 13:51:12.411637 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.key: {Name:mk560c1ea1a143fd965d6012faf57eea3e2d6f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:12.411846 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem (1338 bytes)
	W0414 13:51:12.411886 1234466 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746_empty.pem, impossibly tiny 0 bytes
	I0414 13:51:12.411893 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 13:51:12.411915 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/ca.pem (1078 bytes)
	I0414 13:51:12.411935 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/cert.pem (1123 bytes)
	I0414 13:51:12.411959 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/key.pem (1675 bytes)
	I0414 13:51:12.411997 1234466 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem (1708 bytes)
	I0414 13:51:12.412576 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 13:51:12.444925 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0414 13:51:12.474743 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 13:51:12.504980 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 13:51:12.536499 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 13:51:12.563689 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 13:51:12.590634 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 13:51:12.635261 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/bridge-734713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 13:51:12.666123 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 13:51:12.694495 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/certs/1175746.pem --> /usr/share/ca-certificates/1175746.pem (1338 bytes)
	I0414 13:51:12.728746 1234466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/ssl/certs/11757462.pem --> /usr/share/ca-certificates/11757462.pem (1708 bytes)
	I0414 13:51:12.758377 1234466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 13:51:12.779305 1234466 ssh_runner.go:195] Run: openssl version
	I0414 13:51:12.786047 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11757462.pem && ln -fs /usr/share/ca-certificates/11757462.pem /etc/ssl/certs/11757462.pem"
	I0414 13:51:12.798761 1234466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11757462.pem
	I0414 13:51:12.804619 1234466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:27 /usr/share/ca-certificates/11757462.pem
	I0414 13:51:12.804735 1234466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11757462.pem
	I0414 13:51:12.812071 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11757462.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 13:51:12.827337 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 13:51:12.841867 1234466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:51:12.847184 1234466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:51:12.847274 1234466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 13:51:12.856080 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 13:51:12.876470 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1175746.pem && ln -fs /usr/share/ca-certificates/1175746.pem /etc/ssl/certs/1175746.pem"
	I0414 13:51:12.892156 1234466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1175746.pem
	I0414 13:51:12.897657 1234466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:27 /usr/share/ca-certificates/1175746.pem
	I0414 13:51:12.897744 1234466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1175746.pem
	I0414 13:51:12.904810 1234466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1175746.pem /etc/ssl/certs/51391683.0"
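	
	The openssl/ln sequence above builds the standard hashed-symlink layout under /etc/ssl/certs: each CA file gets a symlink named after its subject hash plus a collision counter, which is what OpenSSL-linked programs look up at verification time. A minimal sketch of the same idea for one of the certs shown in this log:
	
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # ".0" = first cert with this hash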
	I0414 13:51:12.917789 1234466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 13:51:12.923108 1234466 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 13:51:12.923173 1234466 kubeadm.go:392] StartCluster: {Name:bridge-734713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-734713 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 13:51:12.923248 1234466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 13:51:12.923303 1234466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 13:51:12.962335 1234466 cri.go:89] found id: ""
	I0414 13:51:12.962418 1234466 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 13:51:12.973991 1234466 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 13:51:12.985345 1234466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 13:51:12.995695 1234466 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 13:51:12.995720 1234466 kubeadm.go:157] found existing configuration files:
	
	I0414 13:51:12.995783 1234466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 13:51:13.005622 1234466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 13:51:13.005697 1234466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 13:51:13.016784 1234466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 13:51:13.027489 1234466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 13:51:13.027586 1234466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 13:51:13.039289 1234466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 13:51:13.051866 1234466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 13:51:13.051980 1234466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 13:51:13.063819 1234466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 13:51:13.075853 1234466 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 13:51:13.075919 1234466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
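	
	The grep/rm pairs above are minikube's stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 (here, none of the files exist yet) is removed so kubeadm can regenerate it. Roughly the same check as a standalone loop (a sketch; file names and endpoint taken from this log):
	
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: let kubeadm rewrite it
	  fi
	done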
	I0414 13:51:13.087095 1234466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 13:51:13.154406 1234466 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 13:51:13.154587 1234466 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 13:51:13.278999 1234466 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 13:51:13.279188 1234466 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 13:51:13.279334 1234466 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 13:51:13.289166 1234466 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 13:51:10.349310 1232896 node_ready.go:53] node "flannel-734713" has status "Ready":"False"
	I0414 13:51:11.892213 1232896 node_ready.go:49] node "flannel-734713" has status "Ready":"True"
	I0414 13:51:11.892247 1232896 node_ready.go:38] duration metric: took 8.046750945s for node "flannel-734713" to be "Ready" ...
	I0414 13:51:11.892257 1232896 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:51:13.051069 1232896 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-8492w" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:13.335073 1234466 out.go:235]   - Generating certificates and keys ...
	I0414 13:51:13.335217 1234466 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 13:51:13.335366 1234466 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 13:51:13.992852 1234466 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 13:51:14.120934 1234466 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 13:51:14.598211 1234466 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 13:51:15.150015 1234466 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 13:51:15.222014 1234466 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 13:51:15.222395 1234466 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-734713 localhost] and IPs [192.168.50.72 127.0.0.1 ::1]
	I0414 13:51:15.329052 1234466 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 13:51:15.329421 1234466 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-734713 localhost] and IPs [192.168.50.72 127.0.0.1 ::1]
	I0414 13:51:15.545238 1234466 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 13:51:15.704672 1234466 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 13:51:15.786513 1234466 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 13:51:15.786626 1234466 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 13:51:15.897209 1234466 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 13:51:16.070403 1234466 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 13:51:16.151058 1234466 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 13:51:16.220858 1234466 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 13:51:16.431256 1234466 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 13:51:16.431884 1234466 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 13:51:16.434284 1234466 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 13:51:16.437374 1234466 out.go:235]   - Booting up control plane ...
	I0414 13:51:16.437488 1234466 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 13:51:16.437576 1234466 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 13:51:16.437664 1234466 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 13:51:16.456316 1234466 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 13:51:16.463413 1234466 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 13:51:16.463494 1234466 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 13:51:15.064385 1232896 pod_ready.go:103] pod "coredns-668d6bf9bc-8492w" in "kube-system" namespace has status "Ready":"False"
	I0414 13:51:16.558699 1232896 pod_ready.go:93] pod "coredns-668d6bf9bc-8492w" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:16.558738 1232896 pod_ready.go:82] duration metric: took 3.50762195s for pod "coredns-668d6bf9bc-8492w" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.558755 1232896 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.564828 1232896 pod_ready.go:93] pod "etcd-flannel-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:16.564874 1232896 pod_ready.go:82] duration metric: took 6.109539ms for pod "etcd-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.564894 1232896 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.570966 1232896 pod_ready.go:93] pod "kube-apiserver-flannel-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:16.571007 1232896 pod_ready.go:82] duration metric: took 6.102875ms for pod "kube-apiserver-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.571025 1232896 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.576522 1232896 pod_ready.go:93] pod "kube-controller-manager-flannel-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:16.576553 1232896 pod_ready.go:82] duration metric: took 5.519953ms for pod "kube-controller-manager-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.576565 1232896 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5s8qf" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.633709 1232896 pod_ready.go:93] pod "kube-proxy-5s8qf" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:16.633746 1232896 pod_ready.go:82] duration metric: took 57.173138ms for pod "kube-proxy-5s8qf" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:16.633760 1232896 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:17.032077 1232896 pod_ready.go:93] pod "kube-scheduler-flannel-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:17.032107 1232896 pod_ready.go:82] duration metric: took 398.339484ms for pod "kube-scheduler-flannel-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:17.032119 1232896 pod_ready.go:39] duration metric: took 5.139837902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:51:17.032142 1232896 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:51:17.032202 1232896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:51:17.051024 1232896 api_server.go:72] duration metric: took 14.227638835s to wait for apiserver process to appear ...
	I0414 13:51:17.051068 1232896 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:51:17.051101 1232896 api_server.go:253] Checking apiserver healthz at https://192.168.72.152:8443/healthz ...
	I0414 13:51:17.056849 1232896 api_server.go:279] https://192.168.72.152:8443/healthz returned 200:
	ok
	I0414 13:51:17.058306 1232896 api_server.go:141] control plane version: v1.32.2
	I0414 13:51:17.058339 1232896 api_server.go:131] duration metric: took 7.261984ms to wait for apiserver health ...
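	
	The healthz probe above (flannel-734713's apiserver) can be reproduced by hand; a minimal sketch using the endpoint from this log, with -k because the apiserver serving certificate is not in the local trust store:
	
	curl -k https://192.168.72.152:8443/healthz
	# expected response body on success: ok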
	I0414 13:51:17.058352 1232896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:51:17.233626 1232896 system_pods.go:59] 7 kube-system pods found
	I0414 13:51:17.233668 1232896 system_pods.go:61] "coredns-668d6bf9bc-8492w" [2b8a316e-51f4-421a-92a6-3f073f3aa973] Running
	I0414 13:51:17.233675 1232896 system_pods.go:61] "etcd-flannel-734713" [4ac0fa30-64f5-4a90-bf1e-ddff2f35b0b0] Running
	I0414 13:51:17.233681 1232896 system_pods.go:61] "kube-apiserver-flannel-734713" [d5144931-50e7-4e41-83e4-1e1fbe41f37a] Running
	I0414 13:51:17.233689 1232896 system_pods.go:61] "kube-controller-manager-flannel-734713" [905b4b89-7621-4191-bf5c-96cc140ca066] Running
	I0414 13:51:17.233694 1232896 system_pods.go:61] "kube-proxy-5s8qf" [a9d515b7-10fe-4b11-81a5-194f1db2490a] Running
	I0414 13:51:17.233700 1232896 system_pods.go:61] "kube-scheduler-flannel-734713" [8a6a7583-a85b-4bf2-a0c1-440b86a31937] Running
	I0414 13:51:17.233704 1232896 system_pods.go:61] "storage-provisioner" [dd623537-993a-42ab-a151-492270dce1e4] Running
	I0414 13:51:17.233712 1232896 system_pods.go:74] duration metric: took 175.353961ms to wait for pod list to return data ...
	I0414 13:51:17.233725 1232896 default_sa.go:34] waiting for default service account to be created ...
	I0414 13:51:17.433042 1232896 default_sa.go:45] found service account: "default"
	I0414 13:51:17.433073 1232896 default_sa.go:55] duration metric: took 199.341251ms for default service account to be created ...
	I0414 13:51:17.433084 1232896 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 13:51:17.632131 1232896 system_pods.go:86] 7 kube-system pods found
	I0414 13:51:17.632172 1232896 system_pods.go:89] "coredns-668d6bf9bc-8492w" [2b8a316e-51f4-421a-92a6-3f073f3aa973] Running
	I0414 13:51:17.632178 1232896 system_pods.go:89] "etcd-flannel-734713" [4ac0fa30-64f5-4a90-bf1e-ddff2f35b0b0] Running
	I0414 13:51:17.632182 1232896 system_pods.go:89] "kube-apiserver-flannel-734713" [d5144931-50e7-4e41-83e4-1e1fbe41f37a] Running
	I0414 13:51:17.632186 1232896 system_pods.go:89] "kube-controller-manager-flannel-734713" [905b4b89-7621-4191-bf5c-96cc140ca066] Running
	I0414 13:51:17.632189 1232896 system_pods.go:89] "kube-proxy-5s8qf" [a9d515b7-10fe-4b11-81a5-194f1db2490a] Running
	I0414 13:51:17.632193 1232896 system_pods.go:89] "kube-scheduler-flannel-734713" [8a6a7583-a85b-4bf2-a0c1-440b86a31937] Running
	I0414 13:51:17.632196 1232896 system_pods.go:89] "storage-provisioner" [dd623537-993a-42ab-a151-492270dce1e4] Running
	I0414 13:51:17.632202 1232896 system_pods.go:126] duration metric: took 199.11238ms to wait for k8s-apps to be running ...
	I0414 13:51:17.632211 1232896 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 13:51:17.632261 1232896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:51:17.650071 1232896 system_svc.go:56] duration metric: took 17.845536ms WaitForService to wait for kubelet
	I0414 13:51:17.650111 1232896 kubeadm.go:582] duration metric: took 14.826762399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:51:17.650133 1232896 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:51:17.831879 1232896 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:51:17.831924 1232896 node_conditions.go:123] node cpu capacity is 2
	I0414 13:51:17.831957 1232896 node_conditions.go:105] duration metric: took 181.817034ms to run NodePressure ...
	I0414 13:51:17.831976 1232896 start.go:241] waiting for startup goroutines ...
	I0414 13:51:17.831987 1232896 start.go:246] waiting for cluster config update ...
	I0414 13:51:17.832003 1232896 start.go:255] writing updated cluster config ...
	I0414 13:51:17.832321 1232896 ssh_runner.go:195] Run: rm -f paused
	I0414 13:51:17.892758 1232896 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 13:51:17.895371 1232896 out.go:177] * Done! kubectl is now configured to use "flannel-734713" cluster and "default" namespace by default
	I0414 13:51:16.600377 1234466 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 13:51:16.600569 1234466 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 13:51:17.601297 1234466 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001158671s
	I0414 13:51:17.601416 1234466 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 13:51:23.104197 1234466 kubeadm.go:310] [api-check] The API server is healthy after 5.502711209s
	I0414 13:51:23.119451 1234466 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 13:51:23.135571 1234466 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 13:51:23.172527 1234466 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 13:51:23.172760 1234466 kubeadm.go:310] [mark-control-plane] Marking the node bridge-734713 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 13:51:23.186500 1234466 kubeadm.go:310] [bootstrap-token] Using token: yq0cb1.eusflw7rmhjh0znj
	I0414 13:51:23.188519 1234466 out.go:235]   - Configuring RBAC rules ...
	I0414 13:51:23.188689 1234466 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 13:51:23.203763 1234466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 13:51:23.215702 1234466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 13:51:23.222090 1234466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 13:51:23.227812 1234466 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 13:51:23.233964 1234466 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 13:51:23.511520 1234466 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 13:51:23.988516 1234466 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 13:51:24.512335 1234466 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 13:51:24.513730 1234466 kubeadm.go:310] 
	I0414 13:51:24.513834 1234466 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 13:51:24.513855 1234466 kubeadm.go:310] 
	I0414 13:51:24.513951 1234466 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 13:51:24.513960 1234466 kubeadm.go:310] 
	I0414 13:51:24.513994 1234466 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 13:51:24.514078 1234466 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 13:51:24.514154 1234466 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 13:51:24.514163 1234466 kubeadm.go:310] 
	I0414 13:51:24.514232 1234466 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 13:51:24.514249 1234466 kubeadm.go:310] 
	I0414 13:51:24.514294 1234466 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 13:51:24.514300 1234466 kubeadm.go:310] 
	I0414 13:51:24.514343 1234466 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 13:51:24.514404 1234466 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 13:51:24.514461 1234466 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 13:51:24.514464 1234466 kubeadm.go:310] 
	I0414 13:51:24.514574 1234466 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 13:51:24.514681 1234466 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 13:51:24.514691 1234466 kubeadm.go:310] 
	I0414 13:51:24.514837 1234466 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yq0cb1.eusflw7rmhjh0znj \
	I0414 13:51:24.514989 1234466 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5f2612fbe124e36b509339bfdb73251340c2c4faa099e85bd5300a27bddf90e9 \
	I0414 13:51:24.515043 1234466 kubeadm.go:310] 	--control-plane 
	I0414 13:51:24.515086 1234466 kubeadm.go:310] 
	I0414 13:51:24.515242 1234466 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 13:51:24.515267 1234466 kubeadm.go:310] 
	I0414 13:51:24.515375 1234466 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yq0cb1.eusflw7rmhjh0znj \
	I0414 13:51:24.515520 1234466 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5f2612fbe124e36b509339bfdb73251340c2c4faa099e85bd5300a27bddf90e9 
	I0414 13:51:24.516591 1234466 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 13:51:24.516632 1234466 cni.go:84] Creating CNI manager for "bridge"
	I0414 13:51:24.518631 1234466 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 13:51:24.520244 1234466 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 13:51:24.530819 1234466 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
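	
	The 496-byte file copied above is the bridge CNI config; its exact contents are not printed in this log, so the following is only an illustrative shape (pod subnet taken from the kubeadm ClusterConfiguration earlier), written the way a shell step might lay it down:
	
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge0",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF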
	I0414 13:51:24.555410 1234466 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 13:51:24.555491 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:24.555547 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-734713 minikube.k8s.io/updated_at=2025_04_14T13_51_24_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=9d10b41d083ee2064b1b8c7e16503e13b1847696 minikube.k8s.io/name=bridge-734713 minikube.k8s.io/primary=true
	I0414 13:51:24.585267 1234466 ops.go:34] apiserver oom_adj: -16
	I0414 13:51:24.714624 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:25.215362 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:25.715448 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:26.215573 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:26.715531 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:27.215364 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:27.715418 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:28.215598 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:28.714775 1234466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 13:51:28.806470 1234466 kubeadm.go:1113] duration metric: took 4.251055299s to wait for elevateKubeSystemPrivileges
	I0414 13:51:28.806531 1234466 kubeadm.go:394] duration metric: took 15.883361s to StartCluster
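	
	The repeated "kubectl get sa default" runs above are a readiness poll: the elevateKubeSystemPrivileges step (the minikube-rbac clusterrolebinding created at 13:51:24.555547) appears to be treated as done once the "default" ServiceAccount exists. A sketch of the same wait loop with the binary and kubeconfig paths from this log:
	
	until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # retry until the ServiceAccount controller has created "default"
	done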
	I0414 13:51:28.806560 1234466 settings.go:142] acquiring lock: {Name:mkc68e13b098b3e7461fc88804a0aed191118bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:28.806675 1234466 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:51:28.807853 1234466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/kubeconfig: {Name:mk5eb6c4765d4c70f1db00acbce88c0952cb579b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 13:51:28.808159 1234466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 13:51:28.808158 1234466 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.72 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 13:51:28.808314 1234466 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 13:51:28.808443 1234466 config.go:182] Loaded profile config "bridge-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:51:28.808461 1234466 addons.go:69] Setting default-storageclass=true in profile "bridge-734713"
	I0414 13:51:28.808490 1234466 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-734713"
	I0414 13:51:28.808443 1234466 addons.go:69] Setting storage-provisioner=true in profile "bridge-734713"
	I0414 13:51:28.808529 1234466 addons.go:238] Setting addon storage-provisioner=true in "bridge-734713"
	I0414 13:51:28.808576 1234466 host.go:66] Checking if "bridge-734713" exists ...
	I0414 13:51:28.809008 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:28.809009 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:28.809061 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:28.809074 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:28.809985 1234466 out.go:177] * Verifying Kubernetes components...
	I0414 13:51:28.811969 1234466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 13:51:28.832482 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I0414 13:51:28.832765 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0414 13:51:28.833086 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:28.833355 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:28.833721 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:51:28.833742 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:28.833886 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:51:28.833920 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:28.834368 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:28.834501 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:28.834747 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetState
	I0414 13:51:28.835480 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:28.835539 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:28.839815 1234466 addons.go:238] Setting addon default-storageclass=true in "bridge-734713"
	I0414 13:51:28.839874 1234466 host.go:66] Checking if "bridge-734713" exists ...
	I0414 13:51:28.840302 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:28.840341 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:28.854113 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I0414 13:51:28.854669 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:28.855231 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:51:28.855258 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:28.855711 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:28.855915 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetState
	I0414 13:51:28.857140 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I0414 13:51:28.857755 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:28.858280 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:51:28.858310 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:28.858343 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:28.859784 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:28.860317 1234466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:51:28.860336 1234466 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 13:51:28.860346 1234466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:51:28.861814 1234466 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:51:28.861837 1234466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 13:51:28.861863 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:28.866517 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:28.867027 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:28.867057 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:28.867500 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:28.867763 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:28.867955 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:28.868113 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:28.881764 1234466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42741
	I0414 13:51:28.882567 1234466 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:51:28.883247 1234466 main.go:141] libmachine: Using API Version  1
	I0414 13:51:28.883270 1234466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:51:28.883794 1234466 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:51:28.884041 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetState
	I0414 13:51:28.886084 1234466 main.go:141] libmachine: (bridge-734713) Calling .DriverName
	I0414 13:51:28.886384 1234466 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 13:51:28.886403 1234466 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 13:51:28.886428 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHHostname
	I0414 13:51:28.890687 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:28.891208 1234466 main.go:141] libmachine: (bridge-734713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:90:d7", ip: ""} in network mk-bridge-734713: {Iface:virbr2 ExpiryTime:2025-04-14 14:50:56 +0000 UTC Type:0 Mac:52:54:00:35:90:d7 Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:bridge-734713 Clientid:01:52:54:00:35:90:d7}
	I0414 13:51:28.891236 1234466 main.go:141] libmachine: (bridge-734713) DBG | domain bridge-734713 has defined IP address 192.168.50.72 and MAC address 52:54:00:35:90:d7 in network mk-bridge-734713
	I0414 13:51:28.891481 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHPort
	I0414 13:51:28.891753 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHKeyPath
	I0414 13:51:28.891981 1234466 main.go:141] libmachine: (bridge-734713) Calling .GetSSHUsername
	I0414 13:51:28.892209 1234466 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/bridge-734713/id_rsa Username:docker}
	I0414 13:51:29.105890 1234466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 13:51:29.106101 1234466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 13:51:29.225606 1234466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 13:51:29.360706 1234466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 13:51:29.879719 1234466 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
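	
	The long sed pipeline run at 13:51:29.105890 splices a hosts{} block for host.minikube.internal into the coredns ConfigMap, which is what the "host record injected" line above confirms. One way to check the injected record afterwards, reusing the kubeconfig path from this log:
	
	sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	# expected to show: 192.168.50.1 host.minikube.internal, then fallthrough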
	I0414 13:51:29.879875 1234466 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:29.879899 1234466 main.go:141] libmachine: (bridge-734713) Calling .Close
	I0414 13:51:29.880410 1234466 main.go:141] libmachine: (bridge-734713) DBG | Closing plugin on server side
	I0414 13:51:29.880453 1234466 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:29.880473 1234466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:29.880491 1234466 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:29.880517 1234466 main.go:141] libmachine: (bridge-734713) Calling .Close
	I0414 13:51:29.880816 1234466 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:29.880834 1234466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:29.880820 1234466 main.go:141] libmachine: (bridge-734713) DBG | Closing plugin on server side
	I0414 13:51:29.881040 1234466 node_ready.go:35] waiting up to 15m0s for node "bridge-734713" to be "Ready" ...
	I0414 13:51:29.912475 1234466 node_ready.go:49] node "bridge-734713" has status "Ready":"True"
	I0414 13:51:29.912512 1234466 node_ready.go:38] duration metric: took 31.426837ms for node "bridge-734713" to be "Ready" ...
	I0414 13:51:29.912539 1234466 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:51:29.921277 1234466 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:29.921312 1234466 main.go:141] libmachine: (bridge-734713) Calling .Close
	I0414 13:51:29.921670 1234466 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:29.921690 1234466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:29.930866 1234466 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:30.208651 1234466 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:30.208688 1234466 main.go:141] libmachine: (bridge-734713) Calling .Close
	I0414 13:51:30.209285 1234466 main.go:141] libmachine: (bridge-734713) DBG | Closing plugin on server side
	I0414 13:51:30.209346 1234466 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:30.209375 1234466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:30.209397 1234466 main.go:141] libmachine: Making call to close driver server
	I0414 13:51:30.209408 1234466 main.go:141] libmachine: (bridge-734713) Calling .Close
	I0414 13:51:30.209799 1234466 main.go:141] libmachine: (bridge-734713) DBG | Closing plugin on server side
	I0414 13:51:30.209869 1234466 main.go:141] libmachine: Successfully made call to close driver server
	I0414 13:51:30.209902 1234466 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 13:51:30.212449 1234466 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 13:51:30.214539 1234466 addons.go:514] duration metric: took 1.406225375s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 13:51:30.386708 1234466 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-734713" context rescaled to 1 replicas
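	
	The rescale noted above trims CoreDNS from kubeadm's default of two replicas to one, which is consistent with one of the two coredns pods ("coredns-668d6bf9bc-b56gb", a few lines below) finishing with phase Succeeded and being skipped by the readiness wait. Roughly equivalent to (a sketch, assuming the kubectl context name matches the profile):
	
	kubectl --context bridge-734713 -n kube-system scale deployment coredns --replicas=1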
	I0414 13:51:31.937908 1234466 pod_ready.go:103] pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status "Ready":"False"
	I0414 13:51:33.938334 1234466 pod_ready.go:103] pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status "Ready":"False"
	I0414 13:51:36.437417 1234466 pod_ready.go:103] pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status "Ready":"False"
	I0414 13:51:38.440165 1234466 pod_ready.go:103] pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status "Ready":"False"
	I0414 13:51:40.938065 1234466 pod_ready.go:98] pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:40 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.72 HostIPs:[{IP:192.168.50.
72}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 13:51:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 13:51:30 +0000 UTC,FinishedAt:2025-04-14 13:51:40 +0000 UTC,ContainerID:cri-o://6a7952f59cfef1a40a0dd262985051e5e943a788e6ba50cc4cf0e5f6d9abca2c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://6a7952f59cfef1a40a0dd262985051e5e943a788e6ba50cc4cf0e5f6d9abca2c Started:0xc0023076d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002346b90} {Name:kube-api-access-4n6jv MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002346ba0}] User:nil
AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 13:51:40.938101 1234466 pod_ready.go:82] duration metric: took 11.007179529s for pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace to be "Ready" ...
	E0414 13:51:40.938117 1234466 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-b56gb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:40 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-14 13:51:29 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.5
0.72 HostIPs:[{IP:192.168.50.72}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-14 13:51:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-14 13:51:30 +0000 UTC,FinishedAt:2025-04-14 13:51:40 +0000 UTC,ContainerID:cri-o://6a7952f59cfef1a40a0dd262985051e5e943a788e6ba50cc4cf0e5f6d9abca2c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://6a7952f59cfef1a40a0dd262985051e5e943a788e6ba50cc4cf0e5f6d9abca2c Started:0xc0023076d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002346b90} {Name:kube-api-access-4n6jv MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRead
Only:0xc002346ba0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0414 13:51:40.938141 1234466 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-z92sg" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.946346 1234466 pod_ready.go:93] pod "coredns-668d6bf9bc-z92sg" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:40.946379 1234466 pod_ready.go:82] duration metric: took 8.225626ms for pod "coredns-668d6bf9bc-z92sg" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.946392 1234466 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.955269 1234466 pod_ready.go:93] pod "etcd-bridge-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:40.955296 1234466 pod_ready.go:82] duration metric: took 8.896656ms for pod "etcd-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.955307 1234466 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.961998 1234466 pod_ready.go:93] pod "kube-apiserver-bridge-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:40.962029 1234466 pod_ready.go:82] duration metric: took 6.713066ms for pod "kube-apiserver-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.962042 1234466 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.966175 1234466 pod_ready.go:93] pod "kube-controller-manager-bridge-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:40.966217 1234466 pod_ready.go:82] duration metric: took 4.165309ms for pod "kube-controller-manager-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:40.966236 1234466 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-9pk92" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:41.335640 1234466 pod_ready.go:93] pod "kube-proxy-9pk92" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:41.335691 1234466 pod_ready.go:82] duration metric: took 369.447003ms for pod "kube-proxy-9pk92" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:41.335704 1234466 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:41.735093 1234466 pod_ready.go:93] pod "kube-scheduler-bridge-734713" in "kube-system" namespace has status "Ready":"True"
	I0414 13:51:41.735135 1234466 pod_ready.go:82] duration metric: took 399.422375ms for pod "kube-scheduler-bridge-734713" in "kube-system" namespace to be "Ready" ...
	I0414 13:51:41.735151 1234466 pod_ready.go:39] duration metric: took 11.822593264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 13:51:41.735178 1234466 api_server.go:52] waiting for apiserver process to appear ...
	I0414 13:51:41.735248 1234466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:51:41.751642 1234466 api_server.go:72] duration metric: took 12.943438798s to wait for apiserver process to appear ...
	I0414 13:51:41.751690 1234466 api_server.go:88] waiting for apiserver healthz status ...
	I0414 13:51:41.751718 1234466 api_server.go:253] Checking apiserver healthz at https://192.168.50.72:8443/healthz ...
	I0414 13:51:41.757618 1234466 api_server.go:279] https://192.168.50.72:8443/healthz returned 200:
	ok
	I0414 13:51:41.758822 1234466 api_server.go:141] control plane version: v1.32.2
	I0414 13:51:41.758852 1234466 api_server.go:131] duration metric: took 7.153797ms to wait for apiserver health ...
	I0414 13:51:41.758862 1234466 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 13:51:41.937084 1234466 system_pods.go:59] 7 kube-system pods found
	I0414 13:51:41.937135 1234466 system_pods.go:61] "coredns-668d6bf9bc-z92sg" [42590442-580d-41ab-9efe-2517a068eb17] Running
	I0414 13:51:41.937143 1234466 system_pods.go:61] "etcd-bridge-734713" [e47a695e-3434-4c51-afaf-9246153d30a2] Running
	I0414 13:51:41.937148 1234466 system_pods.go:61] "kube-apiserver-bridge-734713" [5ce46cf1-b1f6-4cc0-9cfb-f1d687fe8025] Running
	I0414 13:51:41.937153 1234466 system_pods.go:61] "kube-controller-manager-bridge-734713" [0291b7b4-84c2-4121-b702-5c297388a045] Running
	I0414 13:51:41.937158 1234466 system_pods.go:61] "kube-proxy-9pk92" [d27ad409-1c19-4af9-8a04-d7e93ac9d8e0] Running
	I0414 13:51:41.937163 1234466 system_pods.go:61] "kube-scheduler-bridge-734713" [8ca465da-2ca0-46da-b8eb-504b5f12118e] Running
	I0414 13:51:41.937167 1234466 system_pods.go:61] "storage-provisioner" [98448f6e-961f-4647-bf0b-33237d9f4833] Running
	I0414 13:51:41.937176 1234466 system_pods.go:74] duration metric: took 178.306954ms to wait for pod list to return data ...
	I0414 13:51:41.937187 1234466 default_sa.go:34] waiting for default service account to be created ...
	I0414 13:51:42.135201 1234466 default_sa.go:45] found service account: "default"
	I0414 13:51:42.135235 1234466 default_sa.go:55] duration metric: took 198.036975ms for default service account to be created ...
	I0414 13:51:42.135249 1234466 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 13:51:42.335420 1234466 system_pods.go:86] 7 kube-system pods found
	I0414 13:51:42.335468 1234466 system_pods.go:89] "coredns-668d6bf9bc-z92sg" [42590442-580d-41ab-9efe-2517a068eb17] Running
	I0414 13:51:42.335477 1234466 system_pods.go:89] "etcd-bridge-734713" [e47a695e-3434-4c51-afaf-9246153d30a2] Running
	I0414 13:51:42.335483 1234466 system_pods.go:89] "kube-apiserver-bridge-734713" [5ce46cf1-b1f6-4cc0-9cfb-f1d687fe8025] Running
	I0414 13:51:42.335489 1234466 system_pods.go:89] "kube-controller-manager-bridge-734713" [0291b7b4-84c2-4121-b702-5c297388a045] Running
	I0414 13:51:42.335496 1234466 system_pods.go:89] "kube-proxy-9pk92" [d27ad409-1c19-4af9-8a04-d7e93ac9d8e0] Running
	I0414 13:51:42.335502 1234466 system_pods.go:89] "kube-scheduler-bridge-734713" [8ca465da-2ca0-46da-b8eb-504b5f12118e] Running
	I0414 13:51:42.335507 1234466 system_pods.go:89] "storage-provisioner" [98448f6e-961f-4647-bf0b-33237d9f4833] Running
	I0414 13:51:42.335518 1234466 system_pods.go:126] duration metric: took 200.259515ms to wait for k8s-apps to be running ...
	I0414 13:51:42.335529 1234466 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 13:51:42.335592 1234466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:51:42.353882 1234466 system_svc.go:56] duration metric: took 18.329065ms WaitForService to wait for kubelet
	I0414 13:51:42.353934 1234466 kubeadm.go:582] duration metric: took 13.545741946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 13:51:42.353954 1234466 node_conditions.go:102] verifying NodePressure condition ...
	I0414 13:51:42.536452 1234466 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 13:51:42.536505 1234466 node_conditions.go:123] node cpu capacity is 2
	I0414 13:51:42.536526 1234466 node_conditions.go:105] duration metric: took 182.566053ms to run NodePressure ...
	I0414 13:51:42.536542 1234466 start.go:241] waiting for startup goroutines ...
	I0414 13:51:42.536552 1234466 start.go:246] waiting for cluster config update ...
	I0414 13:51:42.536568 1234466 start.go:255] writing updated cluster config ...
	I0414 13:51:42.536998 1234466 ssh_runner.go:195] Run: rm -f paused
	I0414 13:51:42.602285 1234466 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 13:51:42.604659 1234466 out.go:177] * Done! kubectl is now configured to use "bridge-734713" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.114202311Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639561114180836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65c8b0a3-081d-4b6c-b7c2-56b8eb2d0949 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.114953742Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91729bc3-d340-4e64-9a7c-2eba627c9b00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.115015751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91729bc3-d340-4e64-9a7c-2eba627c9b00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.115052731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=91729bc3-d340-4e64-9a7c-2eba627c9b00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.152567218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33c23c9c-a848-4f9e-ac46-782cb01531b6 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.152643524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33c23c9c-a848-4f9e-ac46-782cb01531b6 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.153872023Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=46e3c58e-d579-4a44-b7cf-53b1937fa4a7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.154252045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639561154228014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46e3c58e-d579-4a44-b7cf-53b1937fa4a7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.154897221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6b1e2d9-e060-403d-baa1-73651969cfc9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.154965680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6b1e2d9-e060-403d-baa1-73651969cfc9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.155013453Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f6b1e2d9-e060-403d-baa1-73651969cfc9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.187447756Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e317b99-06ae-4b85-9af1-52858c509436 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.187613215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e317b99-06ae-4b85-9af1-52858c509436 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.189174701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afacbaf1-d7df-411e-9a14-248b7a1eb2c6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.189616123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639561189591204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afacbaf1-d7df-411e-9a14-248b7a1eb2c6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.190521394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d94f248-cc45-49c5-9b9e-ba7028d50e8e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.190574911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d94f248-cc45-49c5-9b9e-ba7028d50e8e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.190611186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9d94f248-cc45-49c5-9b9e-ba7028d50e8e name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.225023065Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=535b8793-a3bd-44ff-aa51-6788f29831b1 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.225100570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=535b8793-a3bd-44ff-aa51-6788f29831b1 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.226929023Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=322b2867-d96f-4813-8ea9-9eb039254b68 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.227349414Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744639561227324096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=322b2867-d96f-4813-8ea9-9eb039254b68 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.228102450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77e12d1b-f67d-40de-b198-1f3f54aa3d81 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.228200295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77e12d1b-f67d-40de-b198-1f3f54aa3d81 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:06:01 old-k8s-version-966509 crio[626]: time="2025-04-14 14:06:01.228239817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=77e12d1b-f67d-40de-b198-1f3f54aa3d81 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 13:42] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052303] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037424] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.184043] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.220918] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.632782] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.648390] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.065849] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072695] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.193720] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.145562] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.255078] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +8.069901] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.072646] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.342677] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +8.500421] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 13:46] systemd-fstab-generator[4889]: Ignoring "noauto" option for root device
	[Apr14 13:48] systemd-fstab-generator[5166]: Ignoring "noauto" option for root device
	[  +0.062342] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:06:01 up 23 min,  0 users,  load average: 0.03, 0.01, 0.02
	Linux old-k8s-version-966509 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]: net.(*Dialer).DialContext(0xc000b5c600, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0007df620, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b5fcc0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0007df620, 0x24, 0x60, 0x7f6811b16378, 0x118, ...)
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]: net/http.(*Transport).dial(0xc0007d0dc0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0007df620, 0x24, 0x0, 0x2ff00000083, 0x20000000202, ...)
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]: net/http.(*Transport).dialConn(0xc0007d0dc0, 0x4f7fe00, 0xc000052030, 0x0, 0xc00021c540, 0x5, 0xc0007df620, 0x24, 0x0, 0xc00055b680, ...)
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]: net/http.(*Transport).dialConnFor(0xc0007d0dc0, 0xc000c72420)
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]: created by net/http.(*Transport).queueForDial
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]: goroutine 173 [select]:
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]: net.(*netFD).connect.func2(0x4f7fe40, 0xc0000e20c0, 0xc000b3cd00, 0xc0006c44e0, 0xc0006c4480)
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]: created by net.(*netFD).connect
	Apr 14 14:06:00 old-k8s-version-966509 kubelet[7023]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Apr 14 14:06:00 old-k8s-version-966509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 178.
	Apr 14 14:06:00 old-k8s-version-966509 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 14:06:00 old-k8s-version-966509 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 14:06:01 old-k8s-version-966509 kubelet[7058]: I0414 14:06:01.070641    7058 server.go:416] Version: v1.20.0
	Apr 14 14:06:01 old-k8s-version-966509 kubelet[7058]: I0414 14:06:01.071024    7058 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 14:06:01 old-k8s-version-966509 kubelet[7058]: I0414 14:06:01.073026    7058 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 14:06:01 old-k8s-version-966509 kubelet[7058]: W0414 14:06:01.077867    7058 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 14 14:06:01 old-k8s-version-966509 kubelet[7058]: I0414 14:06:01.078577    7058 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509 -n old-k8s-version-966509
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 2 (242.087034ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-966509" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (368.62s)

                                                
                                    

Test pass (272/327)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.24
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 13.29
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.32.2/json-events 5.56
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.07
18 TestDownloadOnly/v1.32.2/DeleteAll 13.29
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.67
22 TestOffline 61.51
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 139.38
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 11.57
35 TestAddons/parallel/Registry 19.49
37 TestAddons/parallel/InspektorGadget 11.57
38 TestAddons/parallel/MetricsServer 6.57
40 TestAddons/parallel/CSI 58.11
41 TestAddons/parallel/Headlamp 21.97
42 TestAddons/parallel/CloudSpanner 5.89
43 TestAddons/parallel/LocalPath 56.16
44 TestAddons/parallel/NvidiaDevicePlugin 5.66
45 TestAddons/parallel/Yakd 11.99
47 TestAddons/StoppedEnableDisable 91.4
48 TestCertOptions 52.74
49 TestCertExpiration 307.53
51 TestForceSystemdFlag 86.29
52 TestForceSystemdEnv 69.7
54 TestKVMDriverInstallOrUpdate 4.1
58 TestErrorSpam/setup 43.98
59 TestErrorSpam/start 0.43
60 TestErrorSpam/status 0.86
61 TestErrorSpam/pause 1.76
62 TestErrorSpam/unpause 2.02
63 TestErrorSpam/stop 6.33
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 55.73
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 55.51
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.74
75 TestFunctional/serial/CacheCmd/cache/add_local 2.17
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.97
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 30.05
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.54
86 TestFunctional/serial/LogsFileCmd 1.6
87 TestFunctional/serial/InvalidService 4.34
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 103.89
91 TestFunctional/parallel/DryRun 0.34
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 0.83
97 TestFunctional/parallel/ServiceCmdConnect 43.55
98 TestFunctional/parallel/AddonsCmd 0.19
101 TestFunctional/parallel/SSHCmd 0.52
102 TestFunctional/parallel/CpCmd 1.48
104 TestFunctional/parallel/FileSync 0.22
105 TestFunctional/parallel/CertSync 1.31
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
113 TestFunctional/parallel/License 0.26
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/ServiceCmd/DeployApp 42.2
120 TestFunctional/parallel/ServiceCmd/List 0.45
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
124 TestFunctional/parallel/ProfileCmd/profile_list 0.39
125 TestFunctional/parallel/ServiceCmd/Format 0.35
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
127 TestFunctional/parallel/ServiceCmd/URL 0.34
128 TestFunctional/parallel/MountCmd/any-port 35.73
129 TestFunctional/parallel/MountCmd/specific-port 1.85
130 TestFunctional/parallel/MountCmd/VerifyCleanup 1.21
131 TestFunctional/parallel/Version/short 0.06
132 TestFunctional/parallel/Version/components 0.44
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
137 TestFunctional/parallel/ImageCommands/ImageBuild 3.69
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 206.17
162 TestMultiControlPlane/serial/DeployApp 8.6
163 TestMultiControlPlane/serial/PingHostFromPods 1.38
164 TestMultiControlPlane/serial/AddWorkerNode 61.67
165 TestMultiControlPlane/serial/NodeLabels 0.08
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
167 TestMultiControlPlane/serial/CopyFile 14.48
168 TestMultiControlPlane/serial/StopSecondaryNode 91.56
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
170 TestMultiControlPlane/serial/RestartSecondaryNode 49.49
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 449.78
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.7
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
175 TestMultiControlPlane/serial/StopCluster 273.14
176 TestMultiControlPlane/serial/RestartCluster 166.31
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
178 TestMultiControlPlane/serial/AddSecondaryNode 79.81
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
183 TestJSONOutput/start/Command 58.9
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.77
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.41
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 95.57
215 TestMountStart/serial/StartWithMountFirst 29.55
216 TestMountStart/serial/VerifyMountFirst 0.44
217 TestMountStart/serial/StartWithMountSecond 29.78
218 TestMountStart/serial/VerifyMountSecond 0.44
219 TestMountStart/serial/DeleteFirst 0.96
220 TestMountStart/serial/VerifyMountPostDelete 0.42
221 TestMountStart/serial/Stop 1.34
222 TestMountStart/serial/RestartStopped 23.68
223 TestMountStart/serial/VerifyMountPostStop 0.41
226 TestMultiNode/serial/FreshStart2Nodes 117.73
227 TestMultiNode/serial/DeployApp2Nodes 7.73
228 TestMultiNode/serial/PingHostFrom2Pods 0.9
229 TestMultiNode/serial/AddNode 52.89
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.66
232 TestMultiNode/serial/CopyFile 8.07
233 TestMultiNode/serial/StopNode 2.46
234 TestMultiNode/serial/StartAfterStop 39.14
235 TestMultiNode/serial/RestartKeepsNodes 343.3
236 TestMultiNode/serial/DeleteNode 2.97
237 TestMultiNode/serial/StopMultiNode 182.27
238 TestMultiNode/serial/RestartMultiNode 116.08
239 TestMultiNode/serial/ValidateNameConflict 45.84
246 TestScheduledStopUnix 116.25
250 TestRunningBinaryUpgrade 221.37
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 93.85
257 TestStoppedBinaryUpgrade/Setup 0.36
258 TestStoppedBinaryUpgrade/Upgrade 157.93
259 TestNoKubernetes/serial/StartWithStopK8s 64.82
260 TestNoKubernetes/serial/Start 28.31
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
262 TestNoKubernetes/serial/ProfileList 31.62
263 TestNoKubernetes/serial/Stop 1.33
264 TestNoKubernetes/serial/StartNoArgs 23.07
265 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
281 TestNetworkPlugins/group/false 3.7
286 TestPause/serial/Start 103.14
287 TestPause/serial/SecondStartNoReconfiguration 56.92
290 TestPause/serial/Pause 0.93
291 TestPause/serial/VerifyStatus 0.33
292 TestPause/serial/Unpause 0.86
293 TestPause/serial/PauseAgain 1.05
294 TestPause/serial/DeletePaused 1.24
295 TestPause/serial/VerifyDeletedResources 0.86
297 TestStartStop/group/no-preload/serial/FirstStart 90.12
299 TestStartStop/group/embed-certs/serial/FirstStart 61.54
300 TestStartStop/group/no-preload/serial/DeployApp 12.31
301 TestStartStop/group/embed-certs/serial/DeployApp 11.31
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
303 TestStartStop/group/no-preload/serial/Stop 90.93
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.27
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
307 TestStartStop/group/embed-certs/serial/Stop 91.22
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.38
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
312 TestStartStop/group/no-preload/serial/SecondStart 346.9
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
314 TestStartStop/group/embed-certs/serial/SecondStart 349.37
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
318 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 340.71
319 TestStartStop/group/old-k8s-version/serial/Stop 3.32
320 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
326 TestStartStop/group/no-preload/serial/Pause 3.45
328 TestStartStop/group/newest-cni/serial/FirstStart 55.01
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
331 TestStartStop/group/embed-certs/serial/Pause 3.23
332 TestNetworkPlugins/group/auto/Start 75.8
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.24
334 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.46
336 TestStartStop/group/newest-cni/serial/Stop 10.49
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
340 TestStartStop/group/newest-cni/serial/SecondStart 38.18
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.5
342 TestNetworkPlugins/group/kindnet/Start 88.45
343 TestNetworkPlugins/group/auto/KubeletFlags 0.31
344 TestNetworkPlugins/group/auto/NetCatPod 14.36
345 TestNetworkPlugins/group/auto/DNS 0.18
346 TestNetworkPlugins/group/auto/Localhost 0.13
347 TestNetworkPlugins/group/auto/HairPin 0.15
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
351 TestStartStop/group/newest-cni/serial/Pause 2.62
352 TestNetworkPlugins/group/calico/Start 97.62
353 TestNetworkPlugins/group/custom-flannel/Start 106.51
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
356 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
357 TestNetworkPlugins/group/kindnet/DNS 0.37
358 TestNetworkPlugins/group/kindnet/Localhost 0.18
359 TestNetworkPlugins/group/kindnet/HairPin 0.16
360 TestNetworkPlugins/group/enable-default-cni/Start 65.69
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/calico/KubeletFlags 0.23
363 TestNetworkPlugins/group/calico/NetCatPod 11.26
364 TestNetworkPlugins/group/calico/DNS 0.17
365 TestNetworkPlugins/group/calico/Localhost 0.15
366 TestNetworkPlugins/group/calico/HairPin 0.16
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.3
369 TestNetworkPlugins/group/custom-flannel/DNS 0.21
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
372 TestNetworkPlugins/group/flannel/Start 63.8
373 TestNetworkPlugins/group/bridge/Start 71.21
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
380 TestNetworkPlugins/group/flannel/ControllerPod 6.01
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
382 TestNetworkPlugins/group/flannel/NetCatPod 12.27
383 TestNetworkPlugins/group/flannel/DNS 0.16
384 TestNetworkPlugins/group/flannel/Localhost 0.14
385 TestNetworkPlugins/group/flannel/HairPin 0.13
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
387 TestNetworkPlugins/group/bridge/NetCatPod 11.27
388 TestNetworkPlugins/group/bridge/DNS 0.16
389 TestNetworkPlugins/group/bridge/Localhost 0.13
390 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (9.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-069523 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-069523 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.237025047s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.24s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0414 12:18:55.345562 1175746 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0414 12:18:55.345664 1175746 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-069523
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-069523: exit status 85 (71.4308ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-069523 | jenkins | v1.35.0 | 14 Apr 25 12:18 UTC |          |
	|         | -p download-only-069523        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:18:46
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:18:46.155914 1175758 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:18:46.156100 1175758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:18:46.156112 1175758 out.go:358] Setting ErrFile to fd 2...
	I0414 12:18:46.156118 1175758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:18:46.156369 1175758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	W0414 12:18:46.156547 1175758 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20384-1167927/.minikube/config/config.json: open /home/jenkins/minikube-integration/20384-1167927/.minikube/config/config.json: no such file or directory
	I0414 12:18:46.157200 1175758 out.go:352] Setting JSON to true
	I0414 12:18:46.158360 1175758 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14473,"bootTime":1744618653,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:18:46.158500 1175758 start.go:139] virtualization: kvm guest
	I0414 12:18:46.160994 1175758 out.go:97] [download-only-069523] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0414 12:18:46.161204 1175758 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball: no such file or directory
	I0414 12:18:46.161240 1175758 notify.go:220] Checking for updates...
	I0414 12:18:46.162737 1175758 out.go:169] MINIKUBE_LOCATION=20384
	I0414 12:18:46.164436 1175758 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:18:46.165937 1175758 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 12:18:46.167394 1175758 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 12:18:46.169048 1175758 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 12:18:46.172201 1175758 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 12:18:46.172517 1175758 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:18:46.210647 1175758 out.go:97] Using the kvm2 driver based on user configuration
	I0414 12:18:46.210729 1175758 start.go:297] selected driver: kvm2
	I0414 12:18:46.210743 1175758 start.go:901] validating driver "kvm2" against <nil>
	I0414 12:18:46.211181 1175758 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:18:46.211306 1175758 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:18:46.230486 1175758 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:18:46.230565 1175758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 12:18:46.231086 1175758 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0414 12:18:46.231241 1175758 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 12:18:46.231283 1175758 cni.go:84] Creating CNI manager for ""
	I0414 12:18:46.231323 1175758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:18:46.231333 1175758 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 12:18:46.231387 1175758 start.go:340] cluster config:
	{Name:download-only-069523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-069523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:18:46.231577 1175758 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:18:46.233552 1175758 out.go:97] Downloading VM boot image ...
	I0414 12:18:46.233587 1175758 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 12:18:48.838060 1175758 out.go:97] Starting "download-only-069523" primary control-plane node in "download-only-069523" cluster
	I0414 12:18:48.838144 1175758 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 12:18:48.863313 1175758 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 12:18:48.863379 1175758 cache.go:56] Caching tarball of preloaded images
	I0414 12:18:48.863571 1175758 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 12:18:48.865804 1175758 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0414 12:18:48.865838 1175758 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0414 12:18:48.888964 1175758 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-069523 host does not exist
	  To start a cluster, run: "minikube start -p download-only-069523"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (13.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-amd64 delete --all: (13.287993607s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (13.29s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-069523
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (5.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-415624 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-415624 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.559125697s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (5.56s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0414 12:19:14.420915 1175746 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0414 12:19:14.420965 1175746 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-415624
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-415624: exit status 85 (71.877901ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-069523 | jenkins | v1.35.0 | 14 Apr 25 12:18 UTC |                     |
	|         | -p download-only-069523        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Apr 25 12:18 UTC | 14 Apr 25 12:19 UTC |
	| delete  | -p download-only-069523        | download-only-069523 | jenkins | v1.35.0 | 14 Apr 25 12:19 UTC | 14 Apr 25 12:19 UTC |
	| start   | -o=json --download-only        | download-only-415624 | jenkins | v1.35.0 | 14 Apr 25 12:19 UTC |                     |
	|         | -p download-only-415624        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 12:19:08
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 12:19:08.911875 1176054 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:19:08.912015 1176054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:19:08.912026 1176054 out.go:358] Setting ErrFile to fd 2...
	I0414 12:19:08.912030 1176054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:19:08.912246 1176054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 12:19:08.912907 1176054 out.go:352] Setting JSON to true
	I0414 12:19:08.914039 1176054 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14496,"bootTime":1744618653,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:19:08.914177 1176054 start.go:139] virtualization: kvm guest
	I0414 12:19:08.916894 1176054 out.go:97] [download-only-415624] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:19:08.917251 1176054 notify.go:220] Checking for updates...
	I0414 12:19:08.919151 1176054 out.go:169] MINIKUBE_LOCATION=20384
	I0414 12:19:08.921496 1176054 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:19:08.923457 1176054 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 12:19:08.925408 1176054 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 12:19:08.927072 1176054 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 12:19:08.930286 1176054 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 12:19:08.930577 1176054 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:19:08.968383 1176054 out.go:97] Using the kvm2 driver based on user configuration
	I0414 12:19:08.968435 1176054 start.go:297] selected driver: kvm2
	I0414 12:19:08.968444 1176054 start.go:901] validating driver "kvm2" against <nil>
	I0414 12:19:08.968867 1176054 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:19:08.968986 1176054 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20384-1167927/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 12:19:08.986857 1176054 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 12:19:08.986921 1176054 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 12:19:08.987477 1176054 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0414 12:19:08.987636 1176054 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 12:19:08.987694 1176054 cni.go:84] Creating CNI manager for ""
	I0414 12:19:08.987738 1176054 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 12:19:08.987749 1176054 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 12:19:08.987802 1176054 start.go:340] cluster config:
	{Name:download-only-415624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-415624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:19:08.987900 1176054 iso.go:125] acquiring lock: {Name:mk8f809cb461c81c5ffa481f4cedc4ad92252720 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 12:19:08.990135 1176054 out.go:97] Starting "download-only-415624" primary control-plane node in "download-only-415624" cluster
	I0414 12:19:08.990178 1176054 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:19:09.060154 1176054 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:19:09.060206 1176054 cache.go:56] Caching tarball of preloaded images
	I0414 12:19:09.060401 1176054 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:19:09.062766 1176054 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0414 12:19:09.062812 1176054 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0414 12:19:09.107844 1176054 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 12:19:12.844473 1176054 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0414 12:19:12.844585 1176054 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0414 12:19:13.673845 1176054 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 12:19:13.674229 1176054 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/download-only-415624/config.json ...
	I0414 12:19:13.674263 1176054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/download-only-415624/config.json: {Name:mk054046dcbe1344fe90c0ac8d388d2cc3abdfd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 12:19:13.674427 1176054 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 12:19:13.674570 1176054 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20384-1167927/.minikube/cache/linux/amd64/v1.32.2/kubectl
	
	
	* The control-plane node download-only-415624 host does not exist
	  To start a cluster, run: "minikube start -p download-only-415624"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (13.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-amd64 delete --all: (13.293885554s)
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (13.29s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-415624
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.67s)

                                                
                                                
=== RUN   TestBinaryMirror
I0414 12:19:28.252645 1175746 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-104825 --alsologtostderr --binary-mirror http://127.0.0.1:45431 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-104825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-104825
--- PASS: TestBinaryMirror (0.67s)

                                                
                                    
TestOffline (61.51s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-789450 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-789450 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.47401134s)
helpers_test.go:175: Cleaning up "offline-crio-789450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-789450
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-789450: (1.031053887s)
--- PASS: TestOffline (61.51s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-809953
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-809953: exit status 85 (64.475331ms)

                                                
                                                
-- stdout --
	* Profile "addons-809953" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-809953"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-809953
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-809953: exit status 85 (63.07858ms)

                                                
                                                
-- stdout --
	* Profile "addons-809953" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-809953"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (139.38s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-809953 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-809953 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m19.380376345s)
--- PASS: TestAddons/Setup (139.38s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-809953 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-809953 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.57s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-809953 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-809953 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7ec9aa0c-1b19-4612-8c4a-edfb0a202b47] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7ec9aa0c-1b19-4612-8c4a-edfb0a202b47] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004890374s
addons_test.go:633: (dbg) Run:  kubectl --context addons-809953 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-809953 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-809953 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.57s)

                                                
                                    
TestAddons/parallel/Registry (19.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.434962ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-zxv7w" [0249f203-5f55-4230-83cb-eaf56a33b5e2] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.020124488s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bxxsm" [46066842-76d1-49e9-8cc5-e7b3e9617fd9] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0035975s
addons_test.go:331: (dbg) Run:  kubectl --context addons-809953 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-809953 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-809953 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.628101523s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 ip
2025/04/14 12:22:27 [DEBUG] GET http://192.168.39.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.49s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.57s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dsvwr" [e0f85396-7ede-4456-855a-6a61572ab86c] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004973494s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-809953 addons disable inspektor-gadget --alsologtostderr -v=1: (6.568114738s)
--- PASS: TestAddons/parallel/InspektorGadget (11.57s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.253552ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-fcnxn" [c969a671-fea5-45a8-9791-7229dea7d2c5] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004127502s
addons_test.go:402: (dbg) Run:  kubectl --context addons-809953 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-809953 addons disable metrics-server --alsologtostderr -v=1: (1.445383726s)
--- PASS: TestAddons/parallel/MetricsServer (6.57s)

                                                
                                    
TestAddons/parallel/CSI (58.11s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0414 12:22:28.525609 1175746 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0414 12:22:28.532975 1175746 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0414 12:22:28.533016 1175746 kapi.go:107] duration metric: took 7.415298ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.429152ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-809953 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-809953 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3dfa0d24-7bb0-48fa-be16-a54f85218f26] Pending
helpers_test.go:344: "task-pv-pod" [3dfa0d24-7bb0-48fa-be16-a54f85218f26] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3dfa0d24-7bb0-48fa-be16-a54f85218f26] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004444739s
addons_test.go:511: (dbg) Run:  kubectl --context addons-809953 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-809953 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-809953 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-809953 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-809953 delete pod task-pv-pod: (1.073411694s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-809953 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-809953 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-809953 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2cf59810-db26-466e-b1f9-5221096574c8] Pending
helpers_test.go:344: "task-pv-pod-restore" [2cf59810-db26-466e-b1f9-5221096574c8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2cf59810-db26-466e-b1f9-5221096574c8] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004516501s
addons_test.go:553: (dbg) Run:  kubectl --context addons-809953 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-809953 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-809953 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-809953 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.923750588s)
--- PASS: TestAddons/parallel/CSI (58.11s)

                                                
                                    
TestAddons/parallel/Headlamp (21.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-809953 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-809953 --alsologtostderr -v=1: (1.008041882s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-4vcfh" [2f764b84-7d3c-437a-a6d3-c0746ae9054a] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-4vcfh" [2f764b84-7d3c-437a-a6d3-c0746ae9054a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-4vcfh" [2f764b84-7d3c-437a-a6d3-c0746ae9054a] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.004043721s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-809953 addons disable headlamp --alsologtostderr -v=1: (5.956273135s)
--- PASS: TestAddons/parallel/Headlamp (21.97s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.89s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-rwh24" [0d45f085-1856-4177-9c31-2c0a88d83d17] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003922582s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.89s)

                                                
                                    
TestAddons/parallel/LocalPath (56.16s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-809953 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-809953 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809953 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c6778008-01af-4f48-99a5-c4803116a1fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c6778008-01af-4f48-99a5-c4803116a1fd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c6778008-01af-4f48-99a5-c4803116a1fd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004068209s
addons_test.go:906: (dbg) Run:  kubectl --context addons-809953 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 ssh "cat /opt/local-path-provisioner/pvc-d24cdd85-0538-482e-b012-7f14c849e8a6_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-809953 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-809953 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-809953 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.256590185s)
--- PASS: TestAddons/parallel/LocalPath (56.16s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.66s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-r2lh2" [180e8b31-3785-499d-a052-9c37fdd10c40] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004581308s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.66s)

                                                
                                    
TestAddons/parallel/Yakd (11.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-pmlc6" [a08507cb-e5ec-42fb-8da1-41bbd74b02f3] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004464787s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-809953 addons disable yakd --alsologtostderr -v=1: (5.98823968s)
--- PASS: TestAddons/parallel/Yakd (11.99s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.4s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-809953
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-809953: (1m31.044821701s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-809953
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-809953
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-809953
--- PASS: TestAddons/StoppedEnableDisable (91.40s)

                                                
                                    
TestCertOptions (52.74s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-724745 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-724745 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (50.843366295s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-724745 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-724745 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-724745 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-724745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-724745
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-724745: (1.327876977s)
--- PASS: TestCertOptions (52.74s)

                                                
                                    
TestCertExpiration (307.53s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-737652 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-737652 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m6.675807254s)
E0414 13:34:40.216953 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:34:57.151884 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-737652 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-737652 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (59.954688411s)
helpers_test.go:175: Cleaning up "cert-expiration-737652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-737652
--- PASS: TestCertExpiration (307.53s)

                                                
                                    
TestForceSystemdFlag (86.29s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-902605 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-902605 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.98842034s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-902605 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-902605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-902605
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-902605: (1.090552934s)
--- PASS: TestForceSystemdFlag (86.29s)

                                                
                                    
TestForceSystemdEnv (69.7s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-835118 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-835118 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.632458377s)
helpers_test.go:175: Cleaning up "force-systemd-env-835118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-835118
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-835118: (1.065800339s)
--- PASS: TestForceSystemdEnv (69.70s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.1s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0414 13:35:55.984832 1175746 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 13:35:55.984968 1175746 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0414 13:35:56.036750 1175746 install.go:62] docker-machine-driver-kvm2: exit status 1
W0414 13:35:56.036931 1175746 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 13:35:56.037003 1175746 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2213162635/001/docker-machine-driver-kvm2
I0414 13:35:56.294406 1175746 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2213162635/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc00051d928 gz:0xc00051da10 tar:0xc00051d960 tar.bz2:0xc00051d9d0 tar.gz:0xc00051d9e0 tar.xz:0xc00051d9f0 tar.zst:0xc00051da00 tbz2:0xc00051d9d0 tgz:0xc00051d9e0 txz:0xc00051d9f0 tzst:0xc00051da00 xz:0xc00051da18 zip:0xc00051da20 zst:0xc00051da30] Getters:map[file:0xc001ca2280 http:0xc000532780 https:0xc0005327d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 13:35:56.294471 1175746 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2213162635/001/docker-machine-driver-kvm2
I0414 13:35:58.291003 1175746 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 13:35:58.312320 1175746 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 13:35:58.349647 1175746 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0414 13:35:58.349698 1175746 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0414 13:35:58.349801 1175746 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 13:35:58.349841 1175746 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2213162635/002/docker-machine-driver-kvm2
I0414 13:35:58.404545 1175746 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2213162635/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc00051d928 gz:0xc00051da10 tar:0xc00051d960 tar.bz2:0xc00051d9d0 tar.gz:0xc00051d9e0 tar.xz:0xc00051d9f0 tar.zst:0xc00051da00 tbz2:0xc00051d9d0 tgz:0xc00051d9e0 txz:0xc00051d9f0 tzst:0xc00051da00 xz:0xc00051da18 zip:0xc00051da20 zst:0xc00051da30] Getters:map[file:0xc001ca3940 http:0xc000870b40 https:0xc000870b90] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 13:35:58.404620 1175746 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2213162635/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.10s)

                                                
                                    
TestErrorSpam/setup (43.98s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-248176 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-248176 --driver=kvm2  --container-runtime=crio
E0414 12:26:49.053115 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:49.065073 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:49.077207 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:49.098728 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:49.140247 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:49.221737 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:49.383405 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:49.705269 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:50.346711 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:51.628958 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:54.191888 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:26:59.313435 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-248176 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-248176 --driver=kvm2  --container-runtime=crio: (43.98071196s)
--- PASS: TestErrorSpam/setup (43.98s)

                                                
                                    
TestErrorSpam/start (0.43s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 start --dry-run
--- PASS: TestErrorSpam/start (0.43s)

                                                
                                    
TestErrorSpam/status (0.86s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 status
--- PASS: TestErrorSpam/status (0.86s)

                                                
                                    
TestErrorSpam/pause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 pause
E0414 12:27:09.554999 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 pause
--- PASS: TestErrorSpam/pause (1.76s)

                                                
                                    
TestErrorSpam/unpause (2.02s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 unpause
--- PASS: TestErrorSpam/unpause (2.02s)

                                                
                                    
TestErrorSpam/stop (6.33s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 stop: (2.420754613s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 stop: (1.871704495s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-248176 --log_dir /tmp/nospam-248176 stop: (2.036981751s)
--- PASS: TestErrorSpam/stop (6.33s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20384-1167927/.minikube/files/etc/test/nested/copy/1175746/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (55.73s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-760045 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0414 12:27:30.036810 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:28:10.999159 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-760045 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.730849557s)
--- PASS: TestFunctional/serial/StartWithProxy (55.73s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (55.51s)

=== RUN   TestFunctional/serial/SoftStart
I0414 12:28:14.804637 1175746 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-760045 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-760045 --alsologtostderr -v=8: (55.513599503s)
functional_test.go:680: soft start took 55.514280342s for "functional-760045" cluster.
I0414 12:29:10.318605 1175746 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (55.51s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-760045 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-760045 cache add registry.k8s.io/pause:3.1: (1.158108881s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-760045 cache add registry.k8s.io/pause:3.3: (1.229606214s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-760045 cache add registry.k8s.io/pause:latest: (1.35466463s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.74s)
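
The remote-cache flow above can be reproduced by hand; a minimal sketch, assuming the functional-760045 profile is running and using `minikube` as shorthand for out/minikube-linux-amd64:

    # pull remote images into minikube's local cache and load them into the node
    minikube -p functional-760045 cache add registry.k8s.io/pause:3.1
    minikube -p functional-760045 cache add registry.k8s.io/pause:3.3
    minikube -p functional-760045 cache add registry.k8s.io/pause:latest
    # confirm the runtime inside the VM now sees them (same check as verify_cache_inside_node below)
    minikube -p functional-760045 ssh "sudo crictl images" | grep pause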

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-760045 /tmp/TestFunctionalserialCacheCmdcacheadd_local1334916834/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 cache add minikube-local-cache-test:functional-760045
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-760045 cache add minikube-local-cache-test:functional-760045: (1.793391802s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 cache delete minikube-local-cache-test:functional-760045
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-760045
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.17s)
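
The local-image variant of the same flow; a sketch assuming a working docker on the host, with /path/to/context standing in for the temporary build directory the test generates:

    # build a throwaway image on the host, cache it into the cluster, then clean up
    docker build -t minikube-local-cache-test:functional-760045 /path/to/context
    minikube -p functional-760045 cache add minikube-local-cache-test:functional-760045
    minikube -p functional-760045 cache delete minikube-local-cache-test:functional-760045
    docker rmi minikube-local-cache-test:functional-760045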

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-760045 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (239.661142ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-760045 cache reload: (1.193006902s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)
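
A minimal shell sketch of the reload flow this test exercises, for manual reproduction (assumptions: the functional-760045 profile is running, and `minikube` stands in for out/minikube-linux-amd64):

    # delete a cached image from inside the node, confirm it is gone, then restore it from the host-side cache
    minikube -p functional-760045 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
    minikube -p functional-760045 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # fails: image no longer present
    minikube -p functional-760045 cache reload
    minikube -p functional-760045 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # succeeds again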

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 kubectl -- --context functional-760045 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-760045 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (30.05s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-760045 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0414 12:29:32.923529 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-760045 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.047022917s)
functional_test.go:778: restart took 30.047150051s for "functional-760045" cluster.
I0414 12:29:49.133901 1175746 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (30.05s)
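
A sketch of the restart performed above, assuming the functional-760045 profile already exists; --extra-config passes the flag through to the named component and --wait=all blocks until every core component reports healthy:

    minikube start -p functional-760045 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all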

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-760045 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.54s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-760045 logs: (1.543075446s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 logs --file /tmp/TestFunctionalserialLogsFileCmd828191138/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-760045 logs --file /tmp/TestFunctionalserialLogsFileCmd828191138/001/logs.txt: (1.600497526s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.60s)

                                                
                                    
TestFunctional/serial/InvalidService (4.34s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-760045 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-760045
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-760045: exit status 115 (307.015652ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.48:30590 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-760045 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)
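
A sketch of the negative case exercised above, assuming the functional-760045 profile is running; testdata/invalidsvc.yaml defines a Service whose selector matches no running pod:

    kubectl --context functional-760045 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-760045    # exits 115 with SVC_UNREACHABLE, as in the log above
    kubectl --context functional-760045 delete -f testdata/invalidsvc.yaml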

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-760045 config get cpus: exit status 14 (92.811629ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-760045 config get cpus: exit status 14 (67.692007ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
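
A sketch of the set/get/unset round trip shown above (profile functional-760045 assumed); per the log, reading an unset key exits with status 14:

    minikube -p functional-760045 config get cpus      # exit 14: key not found
    minikube -p functional-760045 config set cpus 2
    minikube -p functional-760045 config get cpus      # prints 2
    minikube -p functional-760045 config unset cpus
    minikube -p functional-760045 config get cpus      # exit 14 again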

                                                
                                    
TestFunctional/parallel/DashboardCmd (103.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-760045 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-760045 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1183792: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (103.89s)

                                                
                                    
TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-760045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-760045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (169.312462ms)

                                                
                                                
-- stdout --
	* [functional-760045] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 12:30:43.373808 1183644 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:30:43.374123 1183644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:30:43.374135 1183644 out.go:358] Setting ErrFile to fd 2...
	I0414 12:30:43.374140 1183644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:30:43.374394 1183644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 12:30:43.375091 1183644 out.go:352] Setting JSON to false
	I0414 12:30:43.376472 1183644 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15190,"bootTime":1744618653,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:30:43.376574 1183644 start.go:139] virtualization: kvm guest
	I0414 12:30:43.378766 1183644 out.go:177] * [functional-760045] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 12:30:43.380525 1183644 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 12:30:43.380536 1183644 notify.go:220] Checking for updates...
	I0414 12:30:43.384009 1183644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:30:43.385973 1183644 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 12:30:43.387722 1183644 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 12:30:43.389665 1183644 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:30:43.391391 1183644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:30:43.393657 1183644 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:30:43.394103 1183644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:30:43.394247 1183644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:30:43.413366 1183644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0414 12:30:43.413886 1183644 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:30:43.414603 1183644 main.go:141] libmachine: Using API Version  1
	I0414 12:30:43.414636 1183644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:30:43.415058 1183644 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:30:43.415277 1183644 main.go:141] libmachine: (functional-760045) Calling .DriverName
	I0414 12:30:43.415579 1183644 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:30:43.415958 1183644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:30:43.416012 1183644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:30:43.432965 1183644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44017
	I0414 12:30:43.433517 1183644 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:30:43.434073 1183644 main.go:141] libmachine: Using API Version  1
	I0414 12:30:43.434112 1183644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:30:43.434510 1183644 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:30:43.434736 1183644 main.go:141] libmachine: (functional-760045) Calling .DriverName
	I0414 12:30:43.479799 1183644 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 12:30:43.481277 1183644 start.go:297] selected driver: kvm2
	I0414 12:30:43.481304 1183644 start.go:901] validating driver "kvm2" against &{Name:functional-760045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-760045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:30:43.481419 1183644 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:30:43.484325 1183644 out.go:201] 
	W0414 12:30:43.485990 1183644 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0414 12:30:43.487602 1183644 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-760045 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)
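
A sketch of the two dry-run invocations above, assuming the existing functional-760045 profile; --dry-run only validates flags and resources, so nothing is started:

    # 250MB is below the 1800MB minimum, so this exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
    minikube start -p functional-760045 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # without the memory override, the same dry-run validates cleanly against the existing profile
    minikube start -p functional-760045 --dry-run --driver=kvm2 --container-runtime=crio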

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-760045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-760045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (178.374813ms)

                                                
                                                
-- stdout --
	* [functional-760045] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 12:30:43.201147 1183586 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:30:43.201277 1183586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:30:43.201284 1183586 out.go:358] Setting ErrFile to fd 2...
	I0414 12:30:43.201291 1183586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:30:43.201625 1183586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 12:30:43.202295 1183586 out.go:352] Setting JSON to false
	I0414 12:30:43.203649 1183586 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15190,"bootTime":1744618653,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 12:30:43.203758 1183586 start.go:139] virtualization: kvm guest
	I0414 12:30:43.206062 1183586 out.go:177] * [functional-760045] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0414 12:30:43.207775 1183586 notify.go:220] Checking for updates...
	I0414 12:30:43.209395 1183586 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 12:30:43.211130 1183586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 12:30:43.212756 1183586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 12:30:43.214168 1183586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 12:30:43.215813 1183586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 12:30:43.217377 1183586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 12:30:43.219357 1183586 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:30:43.219903 1183586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:30:43.219976 1183586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:30:43.243561 1183586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0414 12:30:43.244167 1183586 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:30:43.244804 1183586 main.go:141] libmachine: Using API Version  1
	I0414 12:30:43.244826 1183586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:30:43.245292 1183586 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:30:43.245573 1183586 main.go:141] libmachine: (functional-760045) Calling .DriverName
	I0414 12:30:43.245997 1183586 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 12:30:43.246510 1183586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:30:43.246578 1183586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:30:43.262981 1183586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
	I0414 12:30:43.263597 1183586 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:30:43.264262 1183586 main.go:141] libmachine: Using API Version  1
	I0414 12:30:43.264286 1183586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:30:43.264687 1183586 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:30:43.264879 1183586 main.go:141] libmachine: (functional-760045) Calling .DriverName
	I0414 12:30:43.309302 1183586 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0414 12:30:43.311366 1183586 start.go:297] selected driver: kvm2
	I0414 12:30:43.311392 1183586 start.go:901] validating driver "kvm2" against &{Name:functional-760045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-760045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 12:30:43.311514 1183586 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 12:30:43.314418 1183586 out.go:201] 
	W0414 12:30:43.316275 1183586 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0414 12:30:43.318079 1183586 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)
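
The three status variants exercised above; a sketch assuming the functional-760045 profile (the Go-template field names are the ones used by the test command):

    minikube -p functional-760045 status
    minikube -p functional-760045 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-760045 status -o json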

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (43.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-760045 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-760045 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-klbg6" [3411c5b6-bfba-4c65-b22a-1375ad82b576] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-klbg6" [3411c5b6-bfba-4c65-b22a-1375ad82b576] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 43.005287279s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.48:31789
functional_test.go:1692: http://192.168.39.48:31789: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-klbg6

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.48:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.48:31789
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (43.55s)
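
A sketch of the NodePort flow recorded above, assuming the functional-760045 profile; the final curl is illustrative only (the test fetches the URL from Go):

    kubectl --context functional-760045 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-760045 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-760045 service hello-node-connect --url)   # e.g. http://192.168.39.48:31789
    curl "$URL"    # echoserver answers with the request details shown above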

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.48s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh -n functional-760045 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 cp functional-760045:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3367036125/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh -n functional-760045 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh -n functional-760045 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1175746/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo cat /etc/test/nested/copy/1175746/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1175746.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo cat /etc/ssl/certs/1175746.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1175746.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo cat /usr/share/ca-certificates/1175746.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/11757462.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo cat /etc/ssl/certs/11757462.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/11757462.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo cat /usr/share/ca-certificates/11757462.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.31s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-760045 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-760045 ssh "sudo systemctl is-active docker": exit status 1 (211.211648ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-760045 ssh "sudo systemctl is-active containerd": exit status 1 (213.682885ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
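
A sketch of the check above, assuming a crio-based functional-760045 profile; systemctl is-active prints "inactive" and exits 3 for a stopped unit, which is why ssh reports status 3:

    minikube -p functional-760045 ssh "sudo systemctl is-active docker"       # inactive, exit 3
    minikube -p functional-760045 ssh "sudo systemctl is-active containerd"   # inactive, exit 3
    minikube -p functional-760045 ssh "sudo systemctl is-active crio"         # not part of the test; expected to print "active"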

                                                
                                    
TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-760045 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-760045 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-760045 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-760045 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1182321: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-760045 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (42.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-760045 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-760045 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-m9jsj" [78c2bfff-6f14-4194-b794-ee10b06219da] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-m9jsj" [78c2bfff-6f14-4194-b794-ee10b06219da] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 42.004607719s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (42.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 service list -o json
functional_test.go:1511: Took "461.380386ms" to run "out/minikube-linux-amd64 -p functional-760045 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.48:30271
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "334.711636ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "59.4699ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)
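
The listing variants timed above; a sketch, noting that the lightweight form skips per-cluster status probes, which is consistent with the much shorter timings in the log:

    minikube profile list                  # full table, probes each cluster's status
    minikube profile list -l               # as run by the test; lightweight listing, no status probe
    minikube profile list -o json
    minikube profile list -o json --light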

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "309.561568ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "60.565969ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.48:30271
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
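The endpoint reported above (http://192.168.39.48:30271) is simply the node IP plus the service's node port. A minimal sketch of reconstructing it by hand follows; it is not the test's implementation, and it assumes the functional-760045 profile is still running, that kubectl and minikube are on PATH, and that hello-node is exposed via a NodePort (which the reported port 30271 suggests for this run).

```go
// sketch_nodeport_url.go - minimal sketch of how the reported endpoint
// (node IP + NodePort) can be reconstructed by hand; not the test's own code.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).Output()
	if err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	ip := run("minikube", "-p", "functional-760045", "ip")
	port := run("kubectl", "--context", "functional-760045", "get", "svc", "hello-node",
		"-o", "jsonpath={.spec.ports[0].nodePort}")
	fmt.Printf("http://%s:%s\n", ip, port) // e.g. http://192.168.39.48:30271 in this run
}
```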

TestFunctional/parallel/MountCmd/any-port (35.73s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdany-port1165986784/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744633842115471536" to /tmp/TestFunctionalparallelMountCmdany-port1165986784/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744633842115471536" to /tmp/TestFunctionalparallelMountCmdany-port1165986784/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744633842115471536" to /tmp/TestFunctionalparallelMountCmdany-port1165986784/001/test-1744633842115471536
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.387654ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0414 12:30:42.355214 1175746 retry.go:31] will retry after 613.108985ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 14 12:30 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 14 12:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 14 12:30 test-1744633842115471536
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh cat /mount-9p/test-1744633842115471536
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-760045 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7cb6157d-76c5-48e2-a0f6-eb478bf84611] Pending
helpers_test.go:344: "busybox-mount" [7cb6157d-76c5-48e2-a0f6-eb478bf84611] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7cb6157d-76c5-48e2-a0f6-eb478bf84611] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7cb6157d-76c5-48e2-a0f6-eb478bf84611] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 33.006454134s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-760045 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdany-port1165986784/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (35.73s)
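The log above shows the pattern this test relies on: `minikube mount` is started as a background process, and the guest is then polled over ssh with findmnt until the 9p mount appears (the first findmnt attempt fails and is retried). A rough standalone sketch of that start-then-poll pattern is below; it is not the actual functional_test_mount_test.go code, and it assumes a running functional-760045 profile and a hypothetical local /tmp/mnt directory to export.

```go
// sketch_mount_poll.go - rough sketch of the start-then-poll pattern seen above;
// not the actual test code. /tmp/mnt is a hypothetical host directory.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Start the 9p mount in the background, like the test's daemon helper.
	mount := exec.Command("minikube", "mount", "-p", "functional-760045", "/tmp/mnt:/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatalf("starting mount: %v", err)
	}
	defer mount.Process.Kill()

	// Poll until findmnt inside the guest reports the 9p mount (it can take a moment).
	for i := 0; i < 10; i++ {
		err := exec.Command("minikube", "-p", "functional-760045", "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Println("/mount-9p is mounted")
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never became visible")
}
```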

TestFunctional/parallel/MountCmd/specific-port (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdspecific-port1768808178/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (217.347586ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0414 12:31:18.063791 1175746 retry.go:31] will retry after 597.791157ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdspecific-port1768808178/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-760045 ssh "sudo umount -f /mount-9p": exit status 1 (209.09279ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-760045 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdspecific-port1768808178/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.21s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1235649276/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1235649276/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1235649276/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T" /mount1: exit status 1 (245.756918ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0414 12:31:19.947053 1175746 retry.go:31] will retry after 273.821875ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-760045 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1235649276/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1235649276/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-760045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1235649276/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.21s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-760045 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-760045
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-760045 image ls --format short --alsologtostderr:
I0414 12:32:28.106836 1185290 out.go:345] Setting OutFile to fd 1 ...
I0414 12:32:28.107008 1185290 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 12:32:28.107022 1185290 out.go:358] Setting ErrFile to fd 2...
I0414 12:32:28.107029 1185290 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 12:32:28.107257 1185290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
I0414 12:32:28.108012 1185290 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 12:32:28.108131 1185290 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 12:32:28.108470 1185290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 12:32:28.108537 1185290 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 12:32:28.125580 1185290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
I0414 12:32:28.126309 1185290 main.go:141] libmachine: () Calling .GetVersion
I0414 12:32:28.127146 1185290 main.go:141] libmachine: Using API Version  1
I0414 12:32:28.127187 1185290 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 12:32:28.127760 1185290 main.go:141] libmachine: () Calling .GetMachineName
I0414 12:32:28.128109 1185290 main.go:141] libmachine: (functional-760045) Calling .GetState
I0414 12:32:28.130408 1185290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 12:32:28.130478 1185290 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 12:32:28.147329 1185290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
I0414 12:32:28.147922 1185290 main.go:141] libmachine: () Calling .GetVersion
I0414 12:32:28.148405 1185290 main.go:141] libmachine: Using API Version  1
I0414 12:32:28.148423 1185290 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 12:32:28.148877 1185290 main.go:141] libmachine: () Calling .GetMachineName
I0414 12:32:28.149072 1185290 main.go:141] libmachine: (functional-760045) Calling .DriverName
I0414 12:32:28.149312 1185290 ssh_runner.go:195] Run: systemctl --version
I0414 12:32:28.149342 1185290 main.go:141] libmachine: (functional-760045) Calling .GetSSHHostname
I0414 12:32:28.152373 1185290 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined MAC address 52:54:00:36:e1:9b in network mk-functional-760045
I0414 12:32:28.152990 1185290 main.go:141] libmachine: (functional-760045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e1:9b", ip: ""} in network mk-functional-760045: {Iface:virbr1 ExpiryTime:2025-04-14 13:27:34 +0000 UTC Type:0 Mac:52:54:00:36:e1:9b Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-760045 Clientid:01:52:54:00:36:e1:9b}
I0414 12:32:28.153038 1185290 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined IP address 192.168.39.48 and MAC address 52:54:00:36:e1:9b in network mk-functional-760045
I0414 12:32:28.153269 1185290 main.go:141] libmachine: (functional-760045) Calling .GetSSHPort
I0414 12:32:28.153505 1185290 main.go:141] libmachine: (functional-760045) Calling .GetSSHKeyPath
I0414 12:32:28.153704 1185290 main.go:141] libmachine: (functional-760045) Calling .GetSSHUsername
I0414 12:32:28.153860 1185290 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/functional-760045/id_rsa Username:docker}
I0414 12:32:28.231088 1185290 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 12:32:28.289213 1185290 main.go:141] libmachine: Making call to close driver server
I0414 12:32:28.289229 1185290 main.go:141] libmachine: (functional-760045) Calling .Close
I0414 12:32:28.289679 1185290 main.go:141] libmachine: (functional-760045) DBG | Closing plugin on server side
I0414 12:32:28.289746 1185290 main.go:141] libmachine: Successfully made call to close driver server
I0414 12:32:28.289763 1185290 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 12:32:28.289777 1185290 main.go:141] libmachine: Making call to close driver server
I0414 12:32:28.289788 1185290 main.go:141] libmachine: (functional-760045) Calling .Close
I0414 12:32:28.290064 1185290 main.go:141] libmachine: Successfully made call to close driver server
I0414 12:32:28.290087 1185290 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-760045 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| localhost/minikube-local-cache-test     | functional-760045  | 7daf889ca940b | 3.33kB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/my-image                      | functional-760045  | 97faf2c166b05 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-760045 image ls --format table --alsologtostderr:
I0414 12:32:32.258267 1185441 out.go:345] Setting OutFile to fd 1 ...
I0414 12:32:32.258604 1185441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 12:32:32.258616 1185441 out.go:358] Setting ErrFile to fd 2...
I0414 12:32:32.258622 1185441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 12:32:32.259331 1185441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
I0414 12:32:32.260688 1185441 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 12:32:32.260888 1185441 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 12:32:32.261488 1185441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 12:32:32.261576 1185441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 12:32:32.277642 1185441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
I0414 12:32:32.278152 1185441 main.go:141] libmachine: () Calling .GetVersion
I0414 12:32:32.278714 1185441 main.go:141] libmachine: Using API Version  1
I0414 12:32:32.278738 1185441 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 12:32:32.279132 1185441 main.go:141] libmachine: () Calling .GetMachineName
I0414 12:32:32.279340 1185441 main.go:141] libmachine: (functional-760045) Calling .GetState
I0414 12:32:32.281475 1185441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 12:32:32.281524 1185441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 12:32:32.297582 1185441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45065
I0414 12:32:32.298283 1185441 main.go:141] libmachine: () Calling .GetVersion
I0414 12:32:32.298973 1185441 main.go:141] libmachine: Using API Version  1
I0414 12:32:32.299013 1185441 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 12:32:32.299566 1185441 main.go:141] libmachine: () Calling .GetMachineName
I0414 12:32:32.299855 1185441 main.go:141] libmachine: (functional-760045) Calling .DriverName
I0414 12:32:32.300124 1185441 ssh_runner.go:195] Run: systemctl --version
I0414 12:32:32.300161 1185441 main.go:141] libmachine: (functional-760045) Calling .GetSSHHostname
I0414 12:32:32.303569 1185441 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined MAC address 52:54:00:36:e1:9b in network mk-functional-760045
I0414 12:32:32.304214 1185441 main.go:141] libmachine: (functional-760045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e1:9b", ip: ""} in network mk-functional-760045: {Iface:virbr1 ExpiryTime:2025-04-14 13:27:34 +0000 UTC Type:0 Mac:52:54:00:36:e1:9b Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-760045 Clientid:01:52:54:00:36:e1:9b}
I0414 12:32:32.304244 1185441 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined IP address 192.168.39.48 and MAC address 52:54:00:36:e1:9b in network mk-functional-760045
I0414 12:32:32.304467 1185441 main.go:141] libmachine: (functional-760045) Calling .GetSSHPort
I0414 12:32:32.304746 1185441 main.go:141] libmachine: (functional-760045) Calling .GetSSHKeyPath
I0414 12:32:32.304954 1185441 main.go:141] libmachine: (functional-760045) Calling .GetSSHUsername
I0414 12:32:32.305128 1185441 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/functional-760045/id_rsa Username:docker}
I0414 12:32:32.391281 1185441 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 12:32:32.457436 1185441 main.go:141] libmachine: Making call to close driver server
I0414 12:32:32.457460 1185441 main.go:141] libmachine: (functional-760045) Calling .Close
I0414 12:32:32.457832 1185441 main.go:141] libmachine: Successfully made call to close driver server
I0414 12:32:32.457885 1185441 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 12:32:32.457901 1185441 main.go:141] libmachine: Making call to close driver server
I0414 12:32:32.457914 1185441 main.go:141] libmachine: (functional-760045) Calling .Close
I0414 12:32:32.457944 1185441 main.go:141] libmachine: (functional-760045) DBG | Closing plugin on server side
I0414 12:32:32.458288 1185441 main.go:141] libmachine: (functional-760045) DBG | Closing plugin on server side
I0414 12:32:32.458325 1185441 main.go:141] libmachine: Successfully made call to close driver server
I0414 12:32:32.458338 1185441 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-760045 image ls --format json --alsologtostderr:
[{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e
98765c"],"repoTags":[],"size":"43824855"},{"id":"3d3bdf8608bc555d7d38e1ff5de7277a46a017803762094609afaef7eff4bd4c","repoDigests":["docker.io/library/cf3786224d0c85ddfea6311f3770004e24f4983530949902930d2ae1401e3c2d-tmp@sha256:f4487387267ebfb6a92a6c4616900de8064be90d14c0f57dbebc0f99d15ca22b"],"repoTags":[],"size":"1466018"},{"id":"97faf2c166b0567f952308e647877f9c2aeb48d1aa80067526099c8729b2716d","repoDigests":["localhost/my-image@sha256:ba5d91826549503b8209109b4fb7de1f19fa6648ae980769331788e4cfc9001f"],"repoTags":["localhost/my-image:functional-760045"],"size":"1468600"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[
"registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39
a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956f
a8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad
8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7daf889ca940b1fa14bfc2000518e1e76f185793e60902cb28c351a604788bbf","repoDigests":["localhost/minikube-local-cache-test@sha256:ffe8966cffe3a8a388cdebf54ba0ffccbb609f7973ba9607b9435cd65d1c5d27"],"repoTags":["localhost/minikube-local-cache-test:functional-760045"],"size":"3330"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":
["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-760045 image ls --format json --alsologtostderr:
I0414 12:32:32.517518 1185465 out.go:345] Setting OutFile to fd 1 ...
I0414 12:32:32.517795 1185465 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 12:32:32.517807 1185465 out.go:358] Setting ErrFile to fd 2...
I0414 12:32:32.517814 1185465 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 12:32:32.518103 1185465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
I0414 12:32:32.518757 1185465 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 12:32:32.518897 1185465 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 12:32:32.519355 1185465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 12:32:32.519444 1185465 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 12:32:32.536616 1185465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35919
I0414 12:32:32.537282 1185465 main.go:141] libmachine: () Calling .GetVersion
I0414 12:32:32.537899 1185465 main.go:141] libmachine: Using API Version  1
I0414 12:32:32.537940 1185465 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 12:32:32.538380 1185465 main.go:141] libmachine: () Calling .GetMachineName
I0414 12:32:32.538603 1185465 main.go:141] libmachine: (functional-760045) Calling .GetState
I0414 12:32:32.541099 1185465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 12:32:32.541174 1185465 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 12:32:32.558122 1185465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36089
I0414 12:32:32.558705 1185465 main.go:141] libmachine: () Calling .GetVersion
I0414 12:32:32.559328 1185465 main.go:141] libmachine: Using API Version  1
I0414 12:32:32.559364 1185465 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 12:32:32.559795 1185465 main.go:141] libmachine: () Calling .GetMachineName
I0414 12:32:32.560031 1185465 main.go:141] libmachine: (functional-760045) Calling .DriverName
I0414 12:32:32.560293 1185465 ssh_runner.go:195] Run: systemctl --version
I0414 12:32:32.560323 1185465 main.go:141] libmachine: (functional-760045) Calling .GetSSHHostname
I0414 12:32:32.563609 1185465 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined MAC address 52:54:00:36:e1:9b in network mk-functional-760045
I0414 12:32:32.564118 1185465 main.go:141] libmachine: (functional-760045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e1:9b", ip: ""} in network mk-functional-760045: {Iface:virbr1 ExpiryTime:2025-04-14 13:27:34 +0000 UTC Type:0 Mac:52:54:00:36:e1:9b Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-760045 Clientid:01:52:54:00:36:e1:9b}
I0414 12:32:32.564165 1185465 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined IP address 192.168.39.48 and MAC address 52:54:00:36:e1:9b in network mk-functional-760045
I0414 12:32:32.564374 1185465 main.go:141] libmachine: (functional-760045) Calling .GetSSHPort
I0414 12:32:32.564613 1185465 main.go:141] libmachine: (functional-760045) Calling .GetSSHKeyPath
I0414 12:32:32.564820 1185465 main.go:141] libmachine: (functional-760045) Calling .GetSSHUsername
I0414 12:32:32.564989 1185465 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/functional-760045/id_rsa Username:docker}
I0414 12:32:32.647394 1185465 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 12:32:32.695952 1185465 main.go:141] libmachine: Making call to close driver server
I0414 12:32:32.695966 1185465 main.go:141] libmachine: (functional-760045) Calling .Close
I0414 12:32:32.696328 1185465 main.go:141] libmachine: Successfully made call to close driver server
I0414 12:32:32.696353 1185465 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 12:32:32.696361 1185465 main.go:141] libmachine: (functional-760045) DBG | Closing plugin on server side
I0414 12:32:32.696377 1185465 main.go:141] libmachine: Making call to close driver server
I0414 12:32:32.696389 1185465 main.go:141] libmachine: (functional-760045) Calling .Close
I0414 12:32:32.696691 1185465 main.go:141] libmachine: Successfully made call to close driver server
I0414 12:32:32.696710 1185465 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
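The JSON emitted above is an array of objects with id, repoDigests, repoTags, and size fields. A minimal decoding sketch is shown below; the field names are taken from the output in this log rather than from any documented schema, and it assumes the functional-760045 profile is still running with minikube on PATH.

```go
// sketch_image_ls.go - minimal sketch for decoding `minikube image ls --format json`;
// struct fields mirror the output shown in this report, not a documented schema.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string of bytes, e.g. "98055648"
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-760045",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		fmt.Printf("%-15.15s tags=%v size=%s\n", img.ID, img.RepoTags, img.Size)
	}
}
```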

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-760045 image ls --format yaml --alsologtostderr:
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 7daf889ca940b1fa14bfc2000518e1e76f185793e60902cb28c351a604788bbf
repoDigests:
- localhost/minikube-local-cache-test@sha256:ffe8966cffe3a8a388cdebf54ba0ffccbb609f7973ba9607b9435cd65d1c5d27
repoTags:
- localhost/minikube-local-cache-test:functional-760045
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-760045 image ls --format yaml --alsologtostderr:
I0414 12:32:28.348484 1185314 out.go:345] Setting OutFile to fd 1 ...
I0414 12:32:28.348766 1185314 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 12:32:28.348775 1185314 out.go:358] Setting ErrFile to fd 2...
I0414 12:32:28.348779 1185314 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 12:32:28.349529 1185314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
I0414 12:32:28.351248 1185314 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 12:32:28.351391 1185314 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 12:32:28.351798 1185314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 12:32:28.351873 1185314 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 12:32:28.368151 1185314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
I0414 12:32:28.368713 1185314 main.go:141] libmachine: () Calling .GetVersion
I0414 12:32:28.369310 1185314 main.go:141] libmachine: Using API Version  1
I0414 12:32:28.369340 1185314 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 12:32:28.369790 1185314 main.go:141] libmachine: () Calling .GetMachineName
I0414 12:32:28.370088 1185314 main.go:141] libmachine: (functional-760045) Calling .GetState
I0414 12:32:28.372498 1185314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 12:32:28.372552 1185314 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 12:32:28.388449 1185314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38479
I0414 12:32:28.389001 1185314 main.go:141] libmachine: () Calling .GetVersion
I0414 12:32:28.389573 1185314 main.go:141] libmachine: Using API Version  1
I0414 12:32:28.389597 1185314 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 12:32:28.390131 1185314 main.go:141] libmachine: () Calling .GetMachineName
I0414 12:32:28.390408 1185314 main.go:141] libmachine: (functional-760045) Calling .DriverName
I0414 12:32:28.390714 1185314 ssh_runner.go:195] Run: systemctl --version
I0414 12:32:28.390748 1185314 main.go:141] libmachine: (functional-760045) Calling .GetSSHHostname
I0414 12:32:28.394683 1185314 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined MAC address 52:54:00:36:e1:9b in network mk-functional-760045
I0414 12:32:28.395180 1185314 main.go:141] libmachine: (functional-760045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e1:9b", ip: ""} in network mk-functional-760045: {Iface:virbr1 ExpiryTime:2025-04-14 13:27:34 +0000 UTC Type:0 Mac:52:54:00:36:e1:9b Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-760045 Clientid:01:52:54:00:36:e1:9b}
I0414 12:32:28.395231 1185314 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined IP address 192.168.39.48 and MAC address 52:54:00:36:e1:9b in network mk-functional-760045
I0414 12:32:28.395508 1185314 main.go:141] libmachine: (functional-760045) Calling .GetSSHPort
I0414 12:32:28.395780 1185314 main.go:141] libmachine: (functional-760045) Calling .GetSSHKeyPath
I0414 12:32:28.395987 1185314 main.go:141] libmachine: (functional-760045) Calling .GetSSHUsername
I0414 12:32:28.396136 1185314 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/functional-760045/id_rsa Username:docker}
I0414 12:32:28.474827 1185314 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 12:32:28.514070 1185314 main.go:141] libmachine: Making call to close driver server
I0414 12:32:28.514090 1185314 main.go:141] libmachine: (functional-760045) Calling .Close
I0414 12:32:28.514445 1185314 main.go:141] libmachine: (functional-760045) DBG | Closing plugin on server side
I0414 12:32:28.514479 1185314 main.go:141] libmachine: Successfully made call to close driver server
I0414 12:32:28.514514 1185314 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 12:32:28.514536 1185314 main.go:141] libmachine: Making call to close driver server
I0414 12:32:28.514548 1185314 main.go:141] libmachine: (functional-760045) Calling .Close
I0414 12:32:28.514850 1185314 main.go:141] libmachine: (functional-760045) DBG | Closing plugin on server side
I0414 12:32:28.514855 1185314 main.go:141] libmachine: Successfully made call to close driver server
I0414 12:32:28.514887 1185314 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-760045 ssh pgrep buildkitd: exit status 1 (206.596819ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image build -t localhost/my-image:functional-760045 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-760045 image build -t localhost/my-image:functional-760045 testdata/build --alsologtostderr: (3.251803855s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-760045 image build -t localhost/my-image:functional-760045 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3d3bdf8608b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-760045
--> 97faf2c166b
Successfully tagged localhost/my-image:functional-760045
97faf2c166b0567f952308e647877f9c2aeb48d1aa80067526099c8729b2716d
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-760045 image build -t localhost/my-image:functional-760045 testdata/build --alsologtostderr:
I0414 12:32:28.778693 1185376 out.go:345] Setting OutFile to fd 1 ...
I0414 12:32:28.778993 1185376 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 12:32:28.779005 1185376 out.go:358] Setting ErrFile to fd 2...
I0414 12:32:28.779010 1185376 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 12:32:28.779253 1185376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
I0414 12:32:28.780000 1185376 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 12:32:28.780710 1185376 config.go:182] Loaded profile config "functional-760045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 12:32:28.781145 1185376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 12:32:28.781202 1185376 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 12:32:28.798988 1185376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
I0414 12:32:28.799850 1185376 main.go:141] libmachine: () Calling .GetVersion
I0414 12:32:28.800481 1185376 main.go:141] libmachine: Using API Version  1
I0414 12:32:28.800519 1185376 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 12:32:28.801055 1185376 main.go:141] libmachine: () Calling .GetMachineName
I0414 12:32:28.801335 1185376 main.go:141] libmachine: (functional-760045) Calling .GetState
I0414 12:32:28.804013 1185376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 12:32:28.804089 1185376 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 12:32:28.822486 1185376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
I0414 12:32:28.823140 1185376 main.go:141] libmachine: () Calling .GetVersion
I0414 12:32:28.823759 1185376 main.go:141] libmachine: Using API Version  1
I0414 12:32:28.823786 1185376 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 12:32:28.824211 1185376 main.go:141] libmachine: () Calling .GetMachineName
I0414 12:32:28.824422 1185376 main.go:141] libmachine: (functional-760045) Calling .DriverName
I0414 12:32:28.824627 1185376 ssh_runner.go:195] Run: systemctl --version
I0414 12:32:28.824651 1185376 main.go:141] libmachine: (functional-760045) Calling .GetSSHHostname
I0414 12:32:28.827626 1185376 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined MAC address 52:54:00:36:e1:9b in network mk-functional-760045
I0414 12:32:28.828118 1185376 main.go:141] libmachine: (functional-760045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e1:9b", ip: ""} in network mk-functional-760045: {Iface:virbr1 ExpiryTime:2025-04-14 13:27:34 +0000 UTC Type:0 Mac:52:54:00:36:e1:9b Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:functional-760045 Clientid:01:52:54:00:36:e1:9b}
I0414 12:32:28.828170 1185376 main.go:141] libmachine: (functional-760045) DBG | domain functional-760045 has defined IP address 192.168.39.48 and MAC address 52:54:00:36:e1:9b in network mk-functional-760045
I0414 12:32:28.828306 1185376 main.go:141] libmachine: (functional-760045) Calling .GetSSHPort
I0414 12:32:28.828604 1185376 main.go:141] libmachine: (functional-760045) Calling .GetSSHKeyPath
I0414 12:32:28.828810 1185376 main.go:141] libmachine: (functional-760045) Calling .GetSSHUsername
I0414 12:32:28.828985 1185376 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/functional-760045/id_rsa Username:docker}
I0414 12:32:28.906000 1185376 build_images.go:161] Building image from path: /tmp/build.718968294.tar
I0414 12:32:28.906076 1185376 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0414 12:32:28.916793 1185376 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.718968294.tar
I0414 12:32:28.922265 1185376 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.718968294.tar: stat -c "%s %y" /var/lib/minikube/build/build.718968294.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.718968294.tar': No such file or directory
I0414 12:32:28.922313 1185376 ssh_runner.go:362] scp /tmp/build.718968294.tar --> /var/lib/minikube/build/build.718968294.tar (3072 bytes)
I0414 12:32:28.950625 1185376 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.718968294
I0414 12:32:28.961877 1185376 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.718968294 -xf /var/lib/minikube/build/build.718968294.tar
I0414 12:32:28.972940 1185376 crio.go:315] Building image: /var/lib/minikube/build/build.718968294
I0414 12:32:28.973049 1185376 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-760045 /var/lib/minikube/build/build.718968294 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0414 12:32:31.939257 1185376 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-760045 /var/lib/minikube/build/build.718968294 --cgroup-manager=cgroupfs: (2.966176904s)
I0414 12:32:31.939379 1185376 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.718968294
I0414 12:32:31.962040 1185376 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.718968294.tar
I0414 12:32:31.973417 1185376 build_images.go:217] Built localhost/my-image:functional-760045 from /tmp/build.718968294.tar
I0414 12:32:31.973461 1185376 build_images.go:133] succeeded building to: functional-760045
I0414 12:32:31.973465 1185376 build_images.go:134] failed building to: 
I0414 12:32:31.973498 1185376 main.go:141] libmachine: Making call to close driver server
I0414 12:32:31.973510 1185376 main.go:141] libmachine: (functional-760045) Calling .Close
I0414 12:32:31.973853 1185376 main.go:141] libmachine: Successfully made call to close driver server
I0414 12:32:31.973875 1185376 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 12:32:31.973883 1185376 main.go:141] libmachine: Making call to close driver server
I0414 12:32:31.973891 1185376 main.go:141] libmachine: (functional-760045) Calling .Close
I0414 12:32:31.973893 1185376 main.go:141] libmachine: (functional-760045) DBG | Closing plugin on server side
I0414 12:32:31.974320 1185376 main.go:141] libmachine: (functional-760045) DBG | Closing plugin on server side
I0414 12:32:31.974320 1185376 main.go:141] libmachine: Successfully made call to close driver server
I0414 12:32:31.974362 1185376 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)
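(For reference, the image-build flow logged above can be replayed by hand against the same VM. The sketch below is illustrative only: the local context directory, tar name, and in-VM ctx path are chosen here, while the podman invocation, image tag, and --cgroup-manager flag are taken verbatim from the log.)

	tar -cf build.tar -C ./build-context .    # hypothetical local build context with a Dockerfile
	out/minikube-linux-amd64 -p functional-760045 cp build.tar functional-760045:/home/docker/build.tar
	out/minikube-linux-amd64 -p functional-760045 ssh "sudo mkdir -p /var/lib/minikube/build/ctx && sudo tar -C /var/lib/minikube/build/ctx -xf /home/docker/build.tar"
	out/minikube-linux-amd64 -p functional-760045 ssh "sudo podman build -t localhost/my-image:functional-760045 /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs"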
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image rm kicbase/echo-server:functional-760045 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-760045 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-760045 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
E0414 12:36:49.053077 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-760045
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-760045
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-760045
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (206.17s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-185201 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 12:41:49.053734 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:43:12.127916 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-185201 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m25.448439579s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr
E0414 12:44:57.146030 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:44:57.152730 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:44:57.164737 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:44:57.186839 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:44:57.228889 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:44:57.311146 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:44:57.472760 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/StartCluster (206.17s)

TestMultiControlPlane/serial/DeployApp (8.6s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
E0414 12:44:57.794533 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- rollout status deployment/busybox
E0414 12:44:58.436321 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:44:59.718584 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:45:02.280395 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-185201 -- rollout status deployment/busybox: (6.09015824s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-hj95g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-j9r8s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-l6qnl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-hj95g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-j9r8s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-l6qnl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-hj95g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-j9r8s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-l6qnl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.60s)

TestMultiControlPlane/serial/PingHostFromPods (1.38s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-hj95g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-hj95g -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-j9r8s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-j9r8s -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-l6qnl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-185201 -- exec busybox-58667487b6-l6qnl -- sh -c "ping -c 1 192.168.39.1"
E0414 12:45:07.402337 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)

TestMultiControlPlane/serial/AddWorkerNode (61.67s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-185201 -v=7 --alsologtostderr
E0414 12:45:17.643864 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:45:38.126105 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-185201 -v=7 --alsologtostderr: (1m0.715750639s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.67s)

TestMultiControlPlane/serial/NodeLabels (0.08s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-185201 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

TestMultiControlPlane/serial/CopyFile (14.48s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp testdata/cp-test.txt ha-185201:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile117660223/001/cp-test_ha-185201.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201:/home/docker/cp-test.txt ha-185201-m02:/home/docker/cp-test_ha-185201_ha-185201-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m02 "sudo cat /home/docker/cp-test_ha-185201_ha-185201-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201:/home/docker/cp-test.txt ha-185201-m03:/home/docker/cp-test_ha-185201_ha-185201-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m03 "sudo cat /home/docker/cp-test_ha-185201_ha-185201-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201:/home/docker/cp-test.txt ha-185201-m04:/home/docker/cp-test_ha-185201_ha-185201-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m04 "sudo cat /home/docker/cp-test_ha-185201_ha-185201-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp testdata/cp-test.txt ha-185201-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile117660223/001/cp-test_ha-185201-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m02:/home/docker/cp-test.txt ha-185201:/home/docker/cp-test_ha-185201-m02_ha-185201.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201 "sudo cat /home/docker/cp-test_ha-185201-m02_ha-185201.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m02:/home/docker/cp-test.txt ha-185201-m03:/home/docker/cp-test_ha-185201-m02_ha-185201-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m03 "sudo cat /home/docker/cp-test_ha-185201-m02_ha-185201-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m02:/home/docker/cp-test.txt ha-185201-m04:/home/docker/cp-test_ha-185201-m02_ha-185201-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m04 "sudo cat /home/docker/cp-test_ha-185201-m02_ha-185201-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp testdata/cp-test.txt ha-185201-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile117660223/001/cp-test_ha-185201-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m03:/home/docker/cp-test.txt ha-185201:/home/docker/cp-test_ha-185201-m03_ha-185201.txt
E0414 12:46:19.087765 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201 "sudo cat /home/docker/cp-test_ha-185201-m03_ha-185201.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m03:/home/docker/cp-test.txt ha-185201-m02:/home/docker/cp-test_ha-185201-m03_ha-185201-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m02 "sudo cat /home/docker/cp-test_ha-185201-m03_ha-185201-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m03:/home/docker/cp-test.txt ha-185201-m04:/home/docker/cp-test_ha-185201-m03_ha-185201-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m04 "sudo cat /home/docker/cp-test_ha-185201-m03_ha-185201-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp testdata/cp-test.txt ha-185201-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile117660223/001/cp-test_ha-185201-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m04:/home/docker/cp-test.txt ha-185201:/home/docker/cp-test_ha-185201-m04_ha-185201.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201 "sudo cat /home/docker/cp-test_ha-185201-m04_ha-185201.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m04:/home/docker/cp-test.txt ha-185201-m02:/home/docker/cp-test_ha-185201-m04_ha-185201-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m02 "sudo cat /home/docker/cp-test_ha-185201-m04_ha-185201-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 cp ha-185201-m04:/home/docker/cp-test.txt ha-185201-m03:/home/docker/cp-test_ha-185201-m04_ha-185201-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201-m03 "sudo cat /home/docker/cp-test_ha-185201-m04_ha-185201-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.48s)

TestMultiControlPlane/serial/StopSecondaryNode (91.56s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 node stop m02 -v=7 --alsologtostderr
E0414 12:46:49.053007 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:47:41.009754 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-185201 node stop m02 -v=7 --alsologtostderr: (1m30.857926238s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr: exit status 7 (701.554983ms)

-- stdout --
	ha-185201
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-185201-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-185201-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-185201-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 12:47:55.675736 1192373 out.go:345] Setting OutFile to fd 1 ...
	I0414 12:47:55.676000 1192373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:47:55.676008 1192373 out.go:358] Setting ErrFile to fd 2...
	I0414 12:47:55.676012 1192373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 12:47:55.676209 1192373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 12:47:55.676399 1192373 out.go:352] Setting JSON to false
	I0414 12:47:55.676433 1192373 mustload.go:65] Loading cluster: ha-185201
	I0414 12:47:55.676539 1192373 notify.go:220] Checking for updates...
	I0414 12:47:55.676997 1192373 config.go:182] Loaded profile config "ha-185201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 12:47:55.677042 1192373 status.go:174] checking status of ha-185201 ...
	I0414 12:47:55.677612 1192373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:47:55.677671 1192373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:47:55.700488 1192373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43449
	I0414 12:47:55.701189 1192373 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:47:55.702018 1192373 main.go:141] libmachine: Using API Version  1
	I0414 12:47:55.702053 1192373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:47:55.702502 1192373 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:47:55.702764 1192373 main.go:141] libmachine: (ha-185201) Calling .GetState
	I0414 12:47:55.704804 1192373 status.go:371] ha-185201 host status = "Running" (err=<nil>)
	I0414 12:47:55.704830 1192373 host.go:66] Checking if "ha-185201" exists ...
	I0414 12:47:55.705248 1192373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:47:55.705314 1192373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:47:55.723063 1192373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38933
	I0414 12:47:55.723587 1192373 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:47:55.724275 1192373 main.go:141] libmachine: Using API Version  1
	I0414 12:47:55.724315 1192373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:47:55.724823 1192373 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:47:55.725066 1192373 main.go:141] libmachine: (ha-185201) Calling .GetIP
	I0414 12:47:55.728720 1192373 main.go:141] libmachine: (ha-185201) DBG | domain ha-185201 has defined MAC address 52:54:00:d6:52:92 in network mk-ha-185201
	I0414 12:47:55.729258 1192373 main.go:141] libmachine: (ha-185201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:52:92", ip: ""} in network mk-ha-185201: {Iface:virbr1 ExpiryTime:2025-04-14 13:41:46 +0000 UTC Type:0 Mac:52:54:00:d6:52:92 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-185201 Clientid:01:52:54:00:d6:52:92}
	I0414 12:47:55.729316 1192373 main.go:141] libmachine: (ha-185201) DBG | domain ha-185201 has defined IP address 192.168.39.82 and MAC address 52:54:00:d6:52:92 in network mk-ha-185201
	I0414 12:47:55.729669 1192373 host.go:66] Checking if "ha-185201" exists ...
	I0414 12:47:55.730029 1192373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:47:55.730090 1192373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:47:55.748140 1192373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46101
	I0414 12:47:55.748628 1192373 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:47:55.749148 1192373 main.go:141] libmachine: Using API Version  1
	I0414 12:47:55.749178 1192373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:47:55.749623 1192373 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:47:55.749852 1192373 main.go:141] libmachine: (ha-185201) Calling .DriverName
	I0414 12:47:55.750060 1192373 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 12:47:55.750110 1192373 main.go:141] libmachine: (ha-185201) Calling .GetSSHHostname
	I0414 12:47:55.753748 1192373 main.go:141] libmachine: (ha-185201) DBG | domain ha-185201 has defined MAC address 52:54:00:d6:52:92 in network mk-ha-185201
	I0414 12:47:55.754360 1192373 main.go:141] libmachine: (ha-185201) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:52:92", ip: ""} in network mk-ha-185201: {Iface:virbr1 ExpiryTime:2025-04-14 13:41:46 +0000 UTC Type:0 Mac:52:54:00:d6:52:92 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-185201 Clientid:01:52:54:00:d6:52:92}
	I0414 12:47:55.754399 1192373 main.go:141] libmachine: (ha-185201) DBG | domain ha-185201 has defined IP address 192.168.39.82 and MAC address 52:54:00:d6:52:92 in network mk-ha-185201
	I0414 12:47:55.754622 1192373 main.go:141] libmachine: (ha-185201) Calling .GetSSHPort
	I0414 12:47:55.754909 1192373 main.go:141] libmachine: (ha-185201) Calling .GetSSHKeyPath
	I0414 12:47:55.755158 1192373 main.go:141] libmachine: (ha-185201) Calling .GetSSHUsername
	I0414 12:47:55.755406 1192373 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/ha-185201/id_rsa Username:docker}
	I0414 12:47:55.846169 1192373 ssh_runner.go:195] Run: systemctl --version
	I0414 12:47:55.853450 1192373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:47:55.874402 1192373 kubeconfig.go:125] found "ha-185201" server: "https://192.168.39.254:8443"
	I0414 12:47:55.874461 1192373 api_server.go:166] Checking apiserver status ...
	I0414 12:47:55.874517 1192373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:47:55.889568 1192373 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0414 12:47:55.900153 1192373 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 12:47:55.900231 1192373 ssh_runner.go:195] Run: ls
	I0414 12:47:55.906125 1192373 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 12:47:55.911266 1192373 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 12:47:55.911307 1192373 status.go:463] ha-185201 apiserver status = Running (err=<nil>)
	I0414 12:47:55.911323 1192373 status.go:176] ha-185201 status: &{Name:ha-185201 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 12:47:55.911350 1192373 status.go:174] checking status of ha-185201-m02 ...
	I0414 12:47:55.912122 1192373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:47:55.912212 1192373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:47:55.930064 1192373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46071
	I0414 12:47:55.930689 1192373 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:47:55.931511 1192373 main.go:141] libmachine: Using API Version  1
	I0414 12:47:55.931570 1192373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:47:55.932054 1192373 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:47:55.932346 1192373 main.go:141] libmachine: (ha-185201-m02) Calling .GetState
	I0414 12:47:55.934392 1192373 status.go:371] ha-185201-m02 host status = "Stopped" (err=<nil>)
	I0414 12:47:55.934415 1192373 status.go:384] host is not running, skipping remaining checks
	I0414 12:47:55.934422 1192373 status.go:176] ha-185201-m02 status: &{Name:ha-185201-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 12:47:55.934440 1192373 status.go:174] checking status of ha-185201-m03 ...
	I0414 12:47:55.934790 1192373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:47:55.934836 1192373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:47:55.951735 1192373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0414 12:47:55.952271 1192373 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:47:55.952819 1192373 main.go:141] libmachine: Using API Version  1
	I0414 12:47:55.952841 1192373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:47:55.953282 1192373 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:47:55.953617 1192373 main.go:141] libmachine: (ha-185201-m03) Calling .GetState
	I0414 12:47:55.955460 1192373 status.go:371] ha-185201-m03 host status = "Running" (err=<nil>)
	I0414 12:47:55.955478 1192373 host.go:66] Checking if "ha-185201-m03" exists ...
	I0414 12:47:55.955866 1192373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:47:55.955909 1192373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:47:55.972928 1192373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38317
	I0414 12:47:55.973535 1192373 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:47:55.974112 1192373 main.go:141] libmachine: Using API Version  1
	I0414 12:47:55.974145 1192373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:47:55.974537 1192373 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:47:55.974731 1192373 main.go:141] libmachine: (ha-185201-m03) Calling .GetIP
	I0414 12:47:55.978326 1192373 main.go:141] libmachine: (ha-185201-m03) DBG | domain ha-185201-m03 has defined MAC address 52:54:00:a7:5e:9f in network mk-ha-185201
	I0414 12:47:55.978844 1192373 main.go:141] libmachine: (ha-185201-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:5e:9f", ip: ""} in network mk-ha-185201: {Iface:virbr1 ExpiryTime:2025-04-14 13:43:56 +0000 UTC Type:0 Mac:52:54:00:a7:5e:9f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-185201-m03 Clientid:01:52:54:00:a7:5e:9f}
	I0414 12:47:55.978885 1192373 main.go:141] libmachine: (ha-185201-m03) DBG | domain ha-185201-m03 has defined IP address 192.168.39.224 and MAC address 52:54:00:a7:5e:9f in network mk-ha-185201
	I0414 12:47:55.979034 1192373 host.go:66] Checking if "ha-185201-m03" exists ...
	I0414 12:47:55.979473 1192373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:47:55.979537 1192373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:47:55.998846 1192373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0414 12:47:55.999351 1192373 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:47:55.999852 1192373 main.go:141] libmachine: Using API Version  1
	I0414 12:47:55.999875 1192373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:47:56.000305 1192373 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:47:56.000564 1192373 main.go:141] libmachine: (ha-185201-m03) Calling .DriverName
	I0414 12:47:56.000813 1192373 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 12:47:56.000840 1192373 main.go:141] libmachine: (ha-185201-m03) Calling .GetSSHHostname
	I0414 12:47:56.004267 1192373 main.go:141] libmachine: (ha-185201-m03) DBG | domain ha-185201-m03 has defined MAC address 52:54:00:a7:5e:9f in network mk-ha-185201
	I0414 12:47:56.004780 1192373 main.go:141] libmachine: (ha-185201-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:5e:9f", ip: ""} in network mk-ha-185201: {Iface:virbr1 ExpiryTime:2025-04-14 13:43:56 +0000 UTC Type:0 Mac:52:54:00:a7:5e:9f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-185201-m03 Clientid:01:52:54:00:a7:5e:9f}
	I0414 12:47:56.004819 1192373 main.go:141] libmachine: (ha-185201-m03) DBG | domain ha-185201-m03 has defined IP address 192.168.39.224 and MAC address 52:54:00:a7:5e:9f in network mk-ha-185201
	I0414 12:47:56.005058 1192373 main.go:141] libmachine: (ha-185201-m03) Calling .GetSSHPort
	I0414 12:47:56.005327 1192373 main.go:141] libmachine: (ha-185201-m03) Calling .GetSSHKeyPath
	I0414 12:47:56.005560 1192373 main.go:141] libmachine: (ha-185201-m03) Calling .GetSSHUsername
	I0414 12:47:56.005722 1192373 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/ha-185201-m03/id_rsa Username:docker}
	I0414 12:47:56.089005 1192373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:47:56.105539 1192373 kubeconfig.go:125] found "ha-185201" server: "https://192.168.39.254:8443"
	I0414 12:47:56.105585 1192373 api_server.go:166] Checking apiserver status ...
	I0414 12:47:56.105628 1192373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 12:47:56.122418 1192373 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1472/cgroup
	W0414 12:47:56.133925 1192373 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1472/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 12:47:56.133997 1192373 ssh_runner.go:195] Run: ls
	I0414 12:47:56.139293 1192373 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 12:47:56.144045 1192373 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 12:47:56.144078 1192373 status.go:463] ha-185201-m03 apiserver status = Running (err=<nil>)
	I0414 12:47:56.144088 1192373 status.go:176] ha-185201-m03 status: &{Name:ha-185201-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 12:47:56.144106 1192373 status.go:174] checking status of ha-185201-m04 ...
	I0414 12:47:56.144425 1192373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:47:56.144471 1192373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:47:56.161204 1192373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44405
	I0414 12:47:56.161695 1192373 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:47:56.162187 1192373 main.go:141] libmachine: Using API Version  1
	I0414 12:47:56.162213 1192373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:47:56.162654 1192373 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:47:56.162859 1192373 main.go:141] libmachine: (ha-185201-m04) Calling .GetState
	I0414 12:47:56.164617 1192373 status.go:371] ha-185201-m04 host status = "Running" (err=<nil>)
	I0414 12:47:56.164639 1192373 host.go:66] Checking if "ha-185201-m04" exists ...
	I0414 12:47:56.164956 1192373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:47:56.165008 1192373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:47:56.181029 1192373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35613
	I0414 12:47:56.181515 1192373 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:47:56.182091 1192373 main.go:141] libmachine: Using API Version  1
	I0414 12:47:56.182122 1192373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:47:56.182605 1192373 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:47:56.182823 1192373 main.go:141] libmachine: (ha-185201-m04) Calling .GetIP
	I0414 12:47:56.186514 1192373 main.go:141] libmachine: (ha-185201-m04) DBG | domain ha-185201-m04 has defined MAC address 52:54:00:08:1f:6b in network mk-ha-185201
	I0414 12:47:56.187108 1192373 main.go:141] libmachine: (ha-185201-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:1f:6b", ip: ""} in network mk-ha-185201: {Iface:virbr1 ExpiryTime:2025-04-14 13:45:25 +0000 UTC Type:0 Mac:52:54:00:08:1f:6b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-185201-m04 Clientid:01:52:54:00:08:1f:6b}
	I0414 12:47:56.187141 1192373 main.go:141] libmachine: (ha-185201-m04) DBG | domain ha-185201-m04 has defined IP address 192.168.39.172 and MAC address 52:54:00:08:1f:6b in network mk-ha-185201
	I0414 12:47:56.187422 1192373 host.go:66] Checking if "ha-185201-m04" exists ...
	I0414 12:47:56.187783 1192373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 12:47:56.187835 1192373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 12:47:56.205404 1192373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39581
	I0414 12:47:56.206085 1192373 main.go:141] libmachine: () Calling .GetVersion
	I0414 12:47:56.207006 1192373 main.go:141] libmachine: Using API Version  1
	I0414 12:47:56.207059 1192373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 12:47:56.207505 1192373 main.go:141] libmachine: () Calling .GetMachineName
	I0414 12:47:56.207804 1192373 main.go:141] libmachine: (ha-185201-m04) Calling .DriverName
	I0414 12:47:56.208074 1192373 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 12:47:56.208121 1192373 main.go:141] libmachine: (ha-185201-m04) Calling .GetSSHHostname
	I0414 12:47:56.211871 1192373 main.go:141] libmachine: (ha-185201-m04) DBG | domain ha-185201-m04 has defined MAC address 52:54:00:08:1f:6b in network mk-ha-185201
	I0414 12:47:56.212407 1192373 main.go:141] libmachine: (ha-185201-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:1f:6b", ip: ""} in network mk-ha-185201: {Iface:virbr1 ExpiryTime:2025-04-14 13:45:25 +0000 UTC Type:0 Mac:52:54:00:08:1f:6b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-185201-m04 Clientid:01:52:54:00:08:1f:6b}
	I0414 12:47:56.212437 1192373 main.go:141] libmachine: (ha-185201-m04) DBG | domain ha-185201-m04 has defined IP address 192.168.39.172 and MAC address 52:54:00:08:1f:6b in network mk-ha-185201
	I0414 12:47:56.212757 1192373 main.go:141] libmachine: (ha-185201-m04) Calling .GetSSHPort
	I0414 12:47:56.213041 1192373 main.go:141] libmachine: (ha-185201-m04) Calling .GetSSHKeyPath
	I0414 12:47:56.213264 1192373 main.go:141] libmachine: (ha-185201-m04) Calling .GetSSHUsername
	I0414 12:47:56.213438 1192373 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/ha-185201-m04/id_rsa Username:docker}
	I0414 12:47:56.303836 1192373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 12:47:56.319127 1192373 status.go:176] ha-185201-m04 status: &{Name:ha-185201-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.56s)
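(Side note: the per-node probes behind the status output above can be approximated by hand. The kubelet check and the healthz endpoint below are taken from the log; using curl for the apiserver probe is an assumption of this sketch, not what minikube itself runs.)

	out/minikube-linux-amd64 -p ha-185201 ssh -n ha-185201 "sudo systemctl is-active --quiet service kubelet" && echo kubelet active
	curl -ks https://192.168.39.254:8443/healthz    # VIP and port as recorded in the kubeconfig check above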
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (49.49s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-185201 node start m02 -v=7 --alsologtostderr: (48.493017054s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.49s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (449.78s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-185201 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-185201 -v=7 --alsologtostderr
E0414 12:49:57.146338 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:50:24.851333 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:51:49.052686 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-185201 -v=7 --alsologtostderr: (4m34.53423125s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-185201 --wait=true -v=7 --alsologtostderr
E0414 12:54:57.146094 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-185201 --wait=true -v=7 --alsologtostderr: (2m55.119480672s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-185201
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (449.78s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.7s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-185201 node delete m03 -v=7 --alsologtostderr: (17.872268229s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.70s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (273.14s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 stop -v=7 --alsologtostderr
E0414 12:56:49.053392 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:59:52.130303 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 12:59:57.146180 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-185201 stop -v=7 --alsologtostderr: (4m33.006284538s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr: exit status 7 (136.843594ms)

-- stdout --
	ha-185201
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-185201-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-185201-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0414 13:01:09.698071 1196604 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:01:09.698200 1196604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:01:09.698205 1196604 out.go:358] Setting ErrFile to fd 2...
	I0414 13:01:09.698210 1196604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:01:09.698458 1196604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:01:09.698702 1196604 out.go:352] Setting JSON to false
	I0414 13:01:09.698745 1196604 mustload.go:65] Loading cluster: ha-185201
	I0414 13:01:09.698841 1196604 notify.go:220] Checking for updates...
	I0414 13:01:09.699414 1196604 config.go:182] Loaded profile config "ha-185201": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:01:09.699457 1196604 status.go:174] checking status of ha-185201 ...
	I0414 13:01:09.700056 1196604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:01:09.700122 1196604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:01:09.728311 1196604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I0414 13:01:09.729045 1196604 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:01:09.729740 1196604 main.go:141] libmachine: Using API Version  1
	I0414 13:01:09.729770 1196604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:01:09.730327 1196604 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:01:09.730633 1196604 main.go:141] libmachine: (ha-185201) Calling .GetState
	I0414 13:01:09.733607 1196604 status.go:371] ha-185201 host status = "Stopped" (err=<nil>)
	I0414 13:01:09.733644 1196604 status.go:384] host is not running, skipping remaining checks
	I0414 13:01:09.733653 1196604 status.go:176] ha-185201 status: &{Name:ha-185201 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:01:09.733715 1196604 status.go:174] checking status of ha-185201-m02 ...
	I0414 13:01:09.734232 1196604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:01:09.734302 1196604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:01:09.752310 1196604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40833
	I0414 13:01:09.752975 1196604 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:01:09.753600 1196604 main.go:141] libmachine: Using API Version  1
	I0414 13:01:09.753637 1196604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:01:09.754046 1196604 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:01:09.754282 1196604 main.go:141] libmachine: (ha-185201-m02) Calling .GetState
	I0414 13:01:09.756543 1196604 status.go:371] ha-185201-m02 host status = "Stopped" (err=<nil>)
	I0414 13:01:09.756570 1196604 status.go:384] host is not running, skipping remaining checks
	I0414 13:01:09.756578 1196604 status.go:176] ha-185201-m02 status: &{Name:ha-185201-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:01:09.756602 1196604 status.go:174] checking status of ha-185201-m04 ...
	I0414 13:01:09.756985 1196604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:01:09.757047 1196604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:01:09.774810 1196604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0414 13:01:09.775311 1196604 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:01:09.775872 1196604 main.go:141] libmachine: Using API Version  1
	I0414 13:01:09.775904 1196604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:01:09.776396 1196604 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:01:09.776623 1196604 main.go:141] libmachine: (ha-185201-m04) Calling .GetState
	I0414 13:01:09.778508 1196604 status.go:371] ha-185201-m04 host status = "Stopped" (err=<nil>)
	I0414 13:01:09.778531 1196604 status.go:384] host is not running, skipping remaining checks
	I0414 13:01:09.778537 1196604 status.go:176] ha-185201-m04 status: &{Name:ha-185201-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (273.14s)
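
Note on the non-zero exit above: with every node stopped, "minikube status" prints Host/Kubelet/APIServer as Stopped and exits 7, which ha_test.go treats as the expected outcome. A minimal shell sketch of the same check, assuming only that a non-zero status exit means "not everything is Running", as observed in this run:

	# stop the HA cluster, then confirm nothing is still reported as Running
	out/minikube-linux-amd64 -p ha-185201 stop -v=7 --alsologtostderr
	if ! out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr; then
		echo "cluster is fully stopped (status exited non-zero, as expected)"
	fi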

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (166.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-185201 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 13:01:20.213184 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:01:49.053039 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-185201 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m45.488784234s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (166.31s)
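
The readiness query at ha_test.go:594 is easier to read outside the test harness quoting; the same go-template, run directly, prints one Ready-condition status ("True"/"False") per node:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'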

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-185201 --control-plane -v=7 --alsologtostderr
E0414 13:04:57.146323 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-185201 --control-plane -v=7 --alsologtostderr: (1m18.875934391s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-185201 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.81s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

                                                
                                    
TestJSONOutput/start/Command (58.9s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-703972 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-703972 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.899296929s)
--- PASS: TestJSONOutput/start/Command (58.90s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-703972 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-703972 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.41s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-703972 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-703972 --output=json --user=testUser: (7.410290694s)
--- PASS: TestJSONOutput/stop/Command (7.41s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-501912 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-501912 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.256167ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"64df0a83-f2cd-493c-bc35-aa2ff850665d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-501912] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d044897-1050-4371-ab28-9ead2f049d31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20384"}}
	{"specversion":"1.0","id":"2ed6d47a-0a87-40c0-8b08-d7f2af87a5e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f5edaa78-6805-4706-96b9-32e4dc860025","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig"}}
	{"specversion":"1.0","id":"7596cf57-04dd-4e48-9f96-11a538b6eba6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube"}}
	{"specversion":"1.0","id":"58a4e78d-2278-4e52-b22d-4796223a3cda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a989949d-1caf-48af-8f6f-6ec9ba1b5e5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70f28365-8b47-43a1-9143-35e48c5ea230","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-501912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-501912
--- PASS: TestErrorJSONOutput (0.24s)
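
Each line in the stdout block above is a CloudEvents-style JSON object whose "type" distinguishes steps, info messages, and errors. A small sketch of filtering such a stream on the host, assuming jq is installed (the test itself does not do this):

	out/minikube-linux-amd64 start -p json-output-error-501912 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# expected to print: The driver 'fail' is not supported on linux/amd64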

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (95.57s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-932550 --driver=kvm2  --container-runtime=crio
E0414 13:06:49.053676 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-932550 --driver=kvm2  --container-runtime=crio: (47.800647017s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-946614 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-946614 --driver=kvm2  --container-runtime=crio: (44.380755112s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-932550
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-946614
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-946614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-946614
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-946614: (1.126317481s)
helpers_test.go:175: Cleaning up "first-932550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-932550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-932550: (1.063584729s)
--- PASS: TestMinikubeProfile (95.57s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.55s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-951426 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-951426 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.545636345s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.55s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.44s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-951426 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-951426 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.44s)
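
The mount options requested at start (--mount-msize 6543, --mount-port 46464, --mount-uid/--mount-gid 0) should show up in the guest's 9p mount entry. A manual spot check along the lines of what this test runs (a sketch; the exact option names in the mount output are an assumption, not something the test asserts):

	out/minikube-linux-amd64 -p mount-start-1-951426 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-951426 ssh -- "mount | grep 9p"
	# look for msize=6543 and port=46464 among the printed 9p options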

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.78s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-970734 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-970734 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.784225259s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.78s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.44s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970734 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970734 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.44s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.96s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-951426 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.96s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970734 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970734 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                    
TestMountStart/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-970734
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-970734: (1.339697002s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.68s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-970734
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-970734: (22.675061866s)
--- PASS: TestMountStart/serial/RestartStopped (23.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970734 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-970734 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (117.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-008349 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 13:09:57.148296 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-008349 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m57.286231086s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.73s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-008349 -- rollout status deployment/busybox: (5.962483052s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- exec busybox-58667487b6-qtmt4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- exec busybox-58667487b6-r52pl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- exec busybox-58667487b6-qtmt4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- exec busybox-58667487b6-r52pl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- exec busybox-58667487b6-qtmt4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- exec busybox-58667487b6-r52pl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.73s)
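
The jsonpath queries at multinode_test.go:505 and :528 are useful on their own; a sketch of the same lookups run with plain kubectl against the test cluster's context instead of the "minikube kubectl" wrapper the test uses (pod IPs and names will differ per run):

	kubectl --context multinode-008349 get pods -o jsonpath='{.items[*].status.podIP}'
	kubectl --context multinode-008349 get pods -o jsonpath='{.items[*].metadata.name}'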

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- exec busybox-58667487b6-qtmt4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- exec busybox-58667487b6-qtmt4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- exec busybox-58667487b6-r52pl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-008349 -- exec busybox-58667487b6-r52pl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (52.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-008349 -v 3 --alsologtostderr
E0414 13:11:49.053578 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-008349 -v 3 --alsologtostderr: (52.257822071s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.89s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-008349 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp testdata/cp-test.txt multinode-008349:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp multinode-008349:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4106620807/001/cp-test_multinode-008349.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp multinode-008349:/home/docker/cp-test.txt multinode-008349-m02:/home/docker/cp-test_multinode-008349_multinode-008349-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m02 "sudo cat /home/docker/cp-test_multinode-008349_multinode-008349-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp multinode-008349:/home/docker/cp-test.txt multinode-008349-m03:/home/docker/cp-test_multinode-008349_multinode-008349-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m03 "sudo cat /home/docker/cp-test_multinode-008349_multinode-008349-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp testdata/cp-test.txt multinode-008349-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp multinode-008349-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4106620807/001/cp-test_multinode-008349-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp multinode-008349-m02:/home/docker/cp-test.txt multinode-008349:/home/docker/cp-test_multinode-008349-m02_multinode-008349.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349 "sudo cat /home/docker/cp-test_multinode-008349-m02_multinode-008349.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp multinode-008349-m02:/home/docker/cp-test.txt multinode-008349-m03:/home/docker/cp-test_multinode-008349-m02_multinode-008349-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m03 "sudo cat /home/docker/cp-test_multinode-008349-m02_multinode-008349-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp testdata/cp-test.txt multinode-008349-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp multinode-008349-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4106620807/001/cp-test_multinode-008349-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp multinode-008349-m03:/home/docker/cp-test.txt multinode-008349:/home/docker/cp-test_multinode-008349-m03_multinode-008349.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349 "sudo cat /home/docker/cp-test_multinode-008349-m03_multinode-008349.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 cp multinode-008349-m03:/home/docker/cp-test.txt multinode-008349-m02:/home/docker/cp-test_multinode-008349-m03_multinode-008349-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m02 "sudo cat /home/docker/cp-test_multinode-008349-m03_multinode-008349-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.07s)
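
The pattern exercised above is: "minikube cp" with an optional <node>: prefix on either path, then "minikube ssh -n <node>" to read the file back. A condensed sketch of one host-to-node round trip, taken from the commands in this test:

	out/minikube-linux-amd64 -p multinode-008349 cp testdata/cp-test.txt multinode-008349-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-008349 ssh -n multinode-008349-m02 "sudo cat /home/docker/cp-test.txt"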

                                                
                                    
TestMultiNode/serial/StopNode (2.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-008349 node stop m03: (1.511643056s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-008349 status: exit status 7 (469.299942ms)

                                                
                                                
-- stdout --
	multinode-008349
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-008349-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-008349-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-008349 status --alsologtostderr: exit status 7 (478.218196ms)

                                                
                                                
-- stdout --
	multinode-008349
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-008349-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-008349-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 13:12:43.875424 1204631 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:12:43.875705 1204631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:12:43.875717 1204631 out.go:358] Setting ErrFile to fd 2...
	I0414 13:12:43.875722 1204631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:12:43.875948 1204631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:12:43.876172 1204631 out.go:352] Setting JSON to false
	I0414 13:12:43.876213 1204631 mustload.go:65] Loading cluster: multinode-008349
	I0414 13:12:43.876345 1204631 notify.go:220] Checking for updates...
	I0414 13:12:43.876649 1204631 config.go:182] Loaded profile config "multinode-008349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:12:43.876676 1204631 status.go:174] checking status of multinode-008349 ...
	I0414 13:12:43.877088 1204631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:43.877148 1204631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:43.898048 1204631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42765
	I0414 13:12:43.898836 1204631 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:43.899574 1204631 main.go:141] libmachine: Using API Version  1
	I0414 13:12:43.899611 1204631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:43.900193 1204631 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:43.900464 1204631 main.go:141] libmachine: (multinode-008349) Calling .GetState
	I0414 13:12:43.902781 1204631 status.go:371] multinode-008349 host status = "Running" (err=<nil>)
	I0414 13:12:43.902809 1204631 host.go:66] Checking if "multinode-008349" exists ...
	I0414 13:12:43.903228 1204631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:43.903317 1204631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:43.921593 1204631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45123
	I0414 13:12:43.922164 1204631 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:43.922773 1204631 main.go:141] libmachine: Using API Version  1
	I0414 13:12:43.922798 1204631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:43.923227 1204631 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:43.923447 1204631 main.go:141] libmachine: (multinode-008349) Calling .GetIP
	I0414 13:12:43.927126 1204631 main.go:141] libmachine: (multinode-008349) DBG | domain multinode-008349 has defined MAC address 52:54:00:38:40:d2 in network mk-multinode-008349
	I0414 13:12:43.927602 1204631 main.go:141] libmachine: (multinode-008349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:40:d2", ip: ""} in network mk-multinode-008349: {Iface:virbr1 ExpiryTime:2025-04-14 14:09:49 +0000 UTC Type:0 Mac:52:54:00:38:40:d2 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:multinode-008349 Clientid:01:52:54:00:38:40:d2}
	I0414 13:12:43.927643 1204631 main.go:141] libmachine: (multinode-008349) DBG | domain multinode-008349 has defined IP address 192.168.39.110 and MAC address 52:54:00:38:40:d2 in network mk-multinode-008349
	I0414 13:12:43.927813 1204631 host.go:66] Checking if "multinode-008349" exists ...
	I0414 13:12:43.928314 1204631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:43.928377 1204631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:43.946025 1204631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0414 13:12:43.946498 1204631 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:43.946962 1204631 main.go:141] libmachine: Using API Version  1
	I0414 13:12:43.946989 1204631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:43.947392 1204631 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:43.947611 1204631 main.go:141] libmachine: (multinode-008349) Calling .DriverName
	I0414 13:12:43.947823 1204631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 13:12:43.947849 1204631 main.go:141] libmachine: (multinode-008349) Calling .GetSSHHostname
	I0414 13:12:43.951936 1204631 main.go:141] libmachine: (multinode-008349) DBG | domain multinode-008349 has defined MAC address 52:54:00:38:40:d2 in network mk-multinode-008349
	I0414 13:12:43.952554 1204631 main.go:141] libmachine: (multinode-008349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:40:d2", ip: ""} in network mk-multinode-008349: {Iface:virbr1 ExpiryTime:2025-04-14 14:09:49 +0000 UTC Type:0 Mac:52:54:00:38:40:d2 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:multinode-008349 Clientid:01:52:54:00:38:40:d2}
	I0414 13:12:43.952587 1204631 main.go:141] libmachine: (multinode-008349) DBG | domain multinode-008349 has defined IP address 192.168.39.110 and MAC address 52:54:00:38:40:d2 in network mk-multinode-008349
	I0414 13:12:43.952849 1204631 main.go:141] libmachine: (multinode-008349) Calling .GetSSHPort
	I0414 13:12:43.953187 1204631 main.go:141] libmachine: (multinode-008349) Calling .GetSSHKeyPath
	I0414 13:12:43.953460 1204631 main.go:141] libmachine: (multinode-008349) Calling .GetSSHUsername
	I0414 13:12:43.953665 1204631 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/multinode-008349/id_rsa Username:docker}
	I0414 13:12:44.039789 1204631 ssh_runner.go:195] Run: systemctl --version
	I0414 13:12:44.046759 1204631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:12:44.063319 1204631 kubeconfig.go:125] found "multinode-008349" server: "https://192.168.39.110:8443"
	I0414 13:12:44.063365 1204631 api_server.go:166] Checking apiserver status ...
	I0414 13:12:44.063405 1204631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 13:12:44.080228 1204631 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0414 13:12:44.092308 1204631 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 13:12:44.092405 1204631 ssh_runner.go:195] Run: ls
	I0414 13:12:44.098587 1204631 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I0414 13:12:44.103220 1204631 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I0414 13:12:44.103257 1204631 status.go:463] multinode-008349 apiserver status = Running (err=<nil>)
	I0414 13:12:44.103269 1204631 status.go:176] multinode-008349 status: &{Name:multinode-008349 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:12:44.103291 1204631 status.go:174] checking status of multinode-008349-m02 ...
	I0414 13:12:44.103609 1204631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.103679 1204631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.121213 1204631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36025
	I0414 13:12:44.121811 1204631 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.122508 1204631 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.122535 1204631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.123053 1204631 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.123291 1204631 main.go:141] libmachine: (multinode-008349-m02) Calling .GetState
	I0414 13:12:44.125284 1204631 status.go:371] multinode-008349-m02 host status = "Running" (err=<nil>)
	I0414 13:12:44.125310 1204631 host.go:66] Checking if "multinode-008349-m02" exists ...
	I0414 13:12:44.125619 1204631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.125669 1204631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.142974 1204631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0414 13:12:44.143594 1204631 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.144055 1204631 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.144080 1204631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.144466 1204631 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.144667 1204631 main.go:141] libmachine: (multinode-008349-m02) Calling .GetIP
	I0414 13:12:44.148300 1204631 main.go:141] libmachine: (multinode-008349-m02) DBG | domain multinode-008349-m02 has defined MAC address 52:54:00:fc:dd:5f in network mk-multinode-008349
	I0414 13:12:44.148827 1204631 main.go:141] libmachine: (multinode-008349-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dd:5f", ip: ""} in network mk-multinode-008349: {Iface:virbr1 ExpiryTime:2025-04-14 14:10:55 +0000 UTC Type:0 Mac:52:54:00:fc:dd:5f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-008349-m02 Clientid:01:52:54:00:fc:dd:5f}
	I0414 13:12:44.148858 1204631 main.go:141] libmachine: (multinode-008349-m02) DBG | domain multinode-008349-m02 has defined IP address 192.168.39.50 and MAC address 52:54:00:fc:dd:5f in network mk-multinode-008349
	I0414 13:12:44.149045 1204631 host.go:66] Checking if "multinode-008349-m02" exists ...
	I0414 13:12:44.149353 1204631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.149403 1204631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.165758 1204631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0414 13:12:44.166351 1204631 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.166958 1204631 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.166984 1204631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.167388 1204631 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.167643 1204631 main.go:141] libmachine: (multinode-008349-m02) Calling .DriverName
	I0414 13:12:44.168000 1204631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 13:12:44.168032 1204631 main.go:141] libmachine: (multinode-008349-m02) Calling .GetSSHHostname
	I0414 13:12:44.172610 1204631 main.go:141] libmachine: (multinode-008349-m02) DBG | domain multinode-008349-m02 has defined MAC address 52:54:00:fc:dd:5f in network mk-multinode-008349
	I0414 13:12:44.173217 1204631 main.go:141] libmachine: (multinode-008349-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dd:5f", ip: ""} in network mk-multinode-008349: {Iface:virbr1 ExpiryTime:2025-04-14 14:10:55 +0000 UTC Type:0 Mac:52:54:00:fc:dd:5f Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-008349-m02 Clientid:01:52:54:00:fc:dd:5f}
	I0414 13:12:44.173247 1204631 main.go:141] libmachine: (multinode-008349-m02) DBG | domain multinode-008349-m02 has defined IP address 192.168.39.50 and MAC address 52:54:00:fc:dd:5f in network mk-multinode-008349
	I0414 13:12:44.173480 1204631 main.go:141] libmachine: (multinode-008349-m02) Calling .GetSSHPort
	I0414 13:12:44.173736 1204631 main.go:141] libmachine: (multinode-008349-m02) Calling .GetSSHKeyPath
	I0414 13:12:44.173886 1204631 main.go:141] libmachine: (multinode-008349-m02) Calling .GetSSHUsername
	I0414 13:12:44.174013 1204631 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20384-1167927/.minikube/machines/multinode-008349-m02/id_rsa Username:docker}
	I0414 13:12:44.260621 1204631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 13:12:44.276556 1204631 status.go:176] multinode-008349-m02 status: &{Name:multinode-008349-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:12:44.276612 1204631 status.go:174] checking status of multinode-008349-m03 ...
	I0414 13:12:44.276978 1204631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:12:44.277035 1204631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:12:44.294719 1204631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42497
	I0414 13:12:44.295342 1204631 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:12:44.295967 1204631 main.go:141] libmachine: Using API Version  1
	I0414 13:12:44.295993 1204631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:12:44.296411 1204631 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:12:44.296647 1204631 main.go:141] libmachine: (multinode-008349-m03) Calling .GetState
	I0414 13:12:44.298642 1204631 status.go:371] multinode-008349-m03 host status = "Stopped" (err=<nil>)
	I0414 13:12:44.298665 1204631 status.go:384] host is not running, skipping remaining checks
	I0414 13:12:44.298673 1204631 status.go:176] multinode-008349-m03 status: &{Name:multinode-008349-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-008349 node start m03 -v=7 --alsologtostderr: (38.418697859s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.14s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (343.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-008349
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-008349
E0414 13:14:57.146442 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-008349: (3m3.526671072s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-008349 --wait=true -v=8 --alsologtostderr
E0414 13:16:32.132489 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:16:49.053103 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:18:00.214703 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-008349 --wait=true -v=8 --alsologtostderr: (2m39.656680159s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-008349
--- PASS: TestMultiNode/serial/RestartKeepsNodes (343.30s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-008349 node delete m03: (2.366520859s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.97s)
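For reference, the go-template passed to kubectl above simply prints the status of each node's Ready condition. A minimal Go sketch with the same effect is shown below; it is illustrative only (not part of the test suite), shells out to kubectl, and assumes a reachable cluster context.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// nodeList mirrors only the fields of `kubectl get nodes -o json` needed
// to read each node's Ready condition.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v", err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		log.Fatalf("decode: %v", err)
	}
	// Equivalent to the {{if eq .type "Ready"}} branch in the go-template above.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s %s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}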

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 stop
E0414 13:19:57.146382 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:21:49.054312 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-008349 stop: (3m2.056490322s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-008349 status: exit status 7 (106.680569ms)

                                                
                                                
-- stdout --
	multinode-008349
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-008349-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-008349 status --alsologtostderr: exit status 7 (102.737062ms)

                                                
                                                
-- stdout --
	multinode-008349
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-008349-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 13:22:11.935742 1208103 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:22:11.936006 1208103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:22:11.936018 1208103 out.go:358] Setting ErrFile to fd 2...
	I0414 13:22:11.936022 1208103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:22:11.936225 1208103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:22:11.936405 1208103 out.go:352] Setting JSON to false
	I0414 13:22:11.936440 1208103 mustload.go:65] Loading cluster: multinode-008349
	I0414 13:22:11.936586 1208103 notify.go:220] Checking for updates...
	I0414 13:22:11.936815 1208103 config.go:182] Loaded profile config "multinode-008349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:22:11.936840 1208103 status.go:174] checking status of multinode-008349 ...
	I0414 13:22:11.937319 1208103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:22:11.937365 1208103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:22:11.956636 1208103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37099
	I0414 13:22:11.957314 1208103 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:22:11.958004 1208103 main.go:141] libmachine: Using API Version  1
	I0414 13:22:11.958043 1208103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:22:11.958634 1208103 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:22:11.958942 1208103 main.go:141] libmachine: (multinode-008349) Calling .GetState
	I0414 13:22:11.961078 1208103 status.go:371] multinode-008349 host status = "Stopped" (err=<nil>)
	I0414 13:22:11.961101 1208103 status.go:384] host is not running, skipping remaining checks
	I0414 13:22:11.961108 1208103 status.go:176] multinode-008349 status: &{Name:multinode-008349 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 13:22:11.961154 1208103 status.go:174] checking status of multinode-008349-m02 ...
	I0414 13:22:11.961530 1208103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 13:22:11.961591 1208103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 13:22:11.979133 1208103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0414 13:22:11.979789 1208103 main.go:141] libmachine: () Calling .GetVersion
	I0414 13:22:11.980505 1208103 main.go:141] libmachine: Using API Version  1
	I0414 13:22:11.980552 1208103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 13:22:11.981007 1208103 main.go:141] libmachine: () Calling .GetMachineName
	I0414 13:22:11.981244 1208103 main.go:141] libmachine: (multinode-008349-m02) Calling .GetState
	I0414 13:22:11.983508 1208103 status.go:371] multinode-008349-m02 host status = "Stopped" (err=<nil>)
	I0414 13:22:11.983535 1208103 status.go:384] host is not running, skipping remaining checks
	I0414 13:22:11.983541 1208103 status.go:176] multinode-008349-m02 status: &{Name:multinode-008349-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.27s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (116.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-008349 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-008349 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.482839851s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-008349 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (116.08s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-008349
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-008349-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-008349-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (82.512734ms)

                                                
                                                
-- stdout --
	* [multinode-008349-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-008349-m02' is duplicated with machine name 'multinode-008349-m02' in profile 'multinode-008349'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-008349-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-008349-m03 --driver=kvm2  --container-runtime=crio: (44.549451397s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-008349
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-008349: exit status 80 (290.094557ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-008349 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-008349-m03 already exists in multinode-008349-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-008349-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.84s)
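The two non-zero exits above are the intended outcomes: minikube refuses a new profile whose name collides with an existing machine name (exit status 14, MK_USAGE) and refuses to add a node whose name is already taken (exit status 80, GUEST_NODE_ADD). Below is a hedged sketch of the kind of uniqueness check involved; the helper is hypothetical and not minikube's actual code path.

package main

import "fmt"

// validateProfileName rejects a candidate profile name that collides with an
// existing profile or machine name (illustrative only).
func validateProfileName(candidate string, existing []string) error {
	for _, name := range existing {
		if name == candidate {
			return fmt.Errorf("profile name %q should be unique", candidate)
		}
	}
	return nil
}

func main() {
	existing := []string{"multinode-008349", "multinode-008349-m02"}
	if err := validateProfileName("multinode-008349-m02", existing); err != nil {
		// Corresponds to the MK_USAGE exit seen above.
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}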

                                                
                                    
TestScheduledStopUnix (116.25s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-920351 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-920351 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.328138827s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-920351 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-920351 -n scheduled-stop-920351
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-920351 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0414 13:28:35.296201 1175746 retry.go:31] will retry after 109.523µs: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.297420 1175746 retry.go:31] will retry after 221.091µs: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.298563 1175746 retry.go:31] will retry after 294.292µs: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.299771 1175746 retry.go:31] will retry after 198.212µs: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.300953 1175746 retry.go:31] will retry after 371.038µs: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.302137 1175746 retry.go:31] will retry after 985.276µs: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.303334 1175746 retry.go:31] will retry after 1.121288ms: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.305596 1175746 retry.go:31] will retry after 1.351058ms: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.307914 1175746 retry.go:31] will retry after 3.445006ms: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.312212 1175746 retry.go:31] will retry after 4.683344ms: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.317620 1175746 retry.go:31] will retry after 5.766243ms: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.323926 1175746 retry.go:31] will retry after 4.905273ms: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.329262 1175746 retry.go:31] will retry after 19.305098ms: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.349623 1175746 retry.go:31] will retry after 15.134722ms: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
I0414 13:28:35.365900 1175746 retry.go:31] will retry after 40.537116ms: open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/scheduled-stop-920351/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-920351 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-920351 -n scheduled-stop-920351
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-920351
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-920351 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-920351
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-920351: exit status 7 (80.320574ms)

                                                
                                                
-- stdout --
	scheduled-stop-920351
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-920351 -n scheduled-stop-920351
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-920351 -n scheduled-stop-920351: exit status 7 (78.969707ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-920351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-920351
--- PASS: TestScheduledStopUnix (116.25s)
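The retry.go lines above show the harness polling for the scheduled-stop pid file with steadily growing delays. Below is a minimal sketch of that wait-with-backoff pattern; the pid-file path and timing constants are hypothetical and only the standard library is used.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path until it exists, roughly doubling the delay
// between attempts, like the retry lines in the log above.
func waitForFile(path string, maxWait time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if !os.IsNotExist(err) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	// Hypothetical pid-file path, standing in for the profile's pid file.
	if err := waitForFile("/tmp/scheduled-stop.pid", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}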

                                                
                                    
TestRunningBinaryUpgrade (221.37s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4258259714 start -p running-upgrade-865678 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0414 13:29:57.146615 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4258259714 start -p running-upgrade-865678 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.511791288s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-865678 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-865678 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.552941678s)
helpers_test.go:175: Cleaning up "running-upgrade-865678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-865678
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-865678: (1.609245221s)
--- PASS: TestRunningBinaryUpgrade (221.37s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-814220 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-814220 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (92.825462ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-814220] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (93.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-814220 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-814220 --driver=kvm2  --container-runtime=crio: (1m33.570922442s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-814220 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.85s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (157.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3737238826 start -p stopped-upgrade-415876 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3737238826 start -p stopped-upgrade-415876 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m52.875988435s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3737238826 -p stopped-upgrade-415876 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3737238826 -p stopped-upgrade-415876 stop: (2.145463889s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-415876 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-415876 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.908878707s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (157.93s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (64.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-814220 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0414 13:31:49.052864 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-814220 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m3.106635853s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-814220 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-814220 status -o json: exit status 2 (327.846859ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-814220","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-814220
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-814220: (1.388015002s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (64.82s)
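The `status -o json` output quoted above is a flat object, so it can be decoded directly. The sketch below shapes a struct after the fields actually printed (Name, Host, Kubelet, APIServer, Kubeconfig, Worker) and runs against the same profile name only as an illustration, assuming a minikube binary on PATH; note that the exit status 2 seen above still comes with a JSON body on stdout.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileStatus mirrors the fields shown in the status -o json output above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// The command may exit non-zero when Kubernetes is stopped, but the JSON
	// body is still written to stdout, so keep the captured output.
	out, _ := exec.Command("minikube", "-p", "NoKubernetes-814220", "status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode status: %v", err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n", st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}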

                                                
                                    
TestNoKubernetes/serial/Start (28.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-814220 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-814220 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.307757676s)
--- PASS: TestNoKubernetes/serial/Start (28.31s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-814220 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-814220 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.178401ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
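The probe above passes precisely because `systemctl is-active --quiet` signals state through its exit code (0 when the unit is active, non-zero, typically 3, when it is not), so a non-zero exit from the ssh command is the expected result when kubelet is not running. A small local sketch of the same probe follows; it assumes a systemd host and uses the unit name only as an illustration.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active and
	// non-zero (commonly 3) when it is inactive or not present.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
	default:
		fmt.Printf("could not run systemctl: %v\n", err)
	}
}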

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.927236406s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0414 13:33:12.133920 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.69385212s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.62s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-814220
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-814220: (1.330880051s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-814220 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-814220 --driver=kvm2  --container-runtime=crio: (23.074036321s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.07s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-415876
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-814220 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-814220 "sudo systemctl is-active --quiet service kubelet": exit status 1 (229.971616ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestNetworkPlugins/group/false (3.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-734713 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-734713 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (123.723007ms)

                                                
                                                
-- stdout --
	* [false-734713] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 13:33:54.912466 1216151 out.go:345] Setting OutFile to fd 1 ...
	I0414 13:33:54.912606 1216151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:33:54.912615 1216151 out.go:358] Setting ErrFile to fd 2...
	I0414 13:33:54.912619 1216151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 13:33:54.912832 1216151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-1167927/.minikube/bin
	I0414 13:33:54.913510 1216151 out.go:352] Setting JSON to false
	I0414 13:33:54.914679 1216151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18982,"bootTime":1744618653,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 13:33:54.914829 1216151 start.go:139] virtualization: kvm guest
	I0414 13:33:54.917480 1216151 out.go:177] * [false-734713] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 13:33:54.919489 1216151 out.go:177]   - MINIKUBE_LOCATION=20384
	I0414 13:33:54.919489 1216151 notify.go:220] Checking for updates...
	I0414 13:33:54.921342 1216151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 13:33:54.923433 1216151 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20384-1167927/kubeconfig
	I0414 13:33:54.925021 1216151 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-1167927/.minikube
	I0414 13:33:54.926460 1216151 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 13:33:54.928426 1216151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 13:33:54.930635 1216151 config.go:182] Loaded profile config "cert-expiration-737652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:33:54.930781 1216151 config.go:182] Loaded profile config "force-systemd-flag-902605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 13:33:54.930894 1216151 config.go:182] Loaded profile config "kubernetes-upgrade-225418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 13:33:54.931024 1216151 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 13:33:54.973542 1216151 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 13:33:54.975510 1216151 start.go:297] selected driver: kvm2
	I0414 13:33:54.975540 1216151 start.go:901] validating driver "kvm2" against <nil>
	I0414 13:33:54.975558 1216151 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 13:33:54.978403 1216151 out.go:201] 
	W0414 13:33:54.980268 1216151 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0414 13:33:54.981984 1216151 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-734713 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-734713" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-734713

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-734713"

                                                
                                                
----------------------- debugLogs end: false-734713 [took: 3.415954992s] --------------------------------
helpers_test.go:175: Cleaning up "false-734713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-734713
--- PASS: TestNetworkPlugins/group/false (3.70s)

                                                
                                    
TestPause/serial/Start (103.14s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-527439 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-527439 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m43.137063548s)
--- PASS: TestPause/serial/Start (103.14s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (56.92s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-527439 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-527439 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.887507612s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (56.92s)

                                                
                                    
TestPause/serial/Pause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-527439 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-527439 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-527439 --output=json --layout=cluster: exit status 2 (332.698828ms)

                                                
                                                
-- stdout --
	{"Name":"pause-527439","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-527439","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
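The `--output=json --layout=cluster` form above reports per-component state with HTTP-like status codes (200 OK, 405 Stopped, 418 Paused). Below is a hedged decoding sketch; the struct is shaped only after the JSON actually printed and is fed a trimmed copy of that output rather than a live command.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// clusterStatus models just the parts of the --layout=cluster JSON shown above.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]struct {
			StatusCode int
			StatusName string
		}
	}
}

func main() {
	// Sample trimmed from the report output above.
	raw := []byte(`{"Name":"pause-527439","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-527439","StatusCode":200,"StatusName":"OK",
	  "Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},
	  "kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("%s/%s: %d %s\n", n.Name, name, c.StatusCode, c.StatusName)
		}
	}
}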

                                                
                                    
TestPause/serial/Unpause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-527439 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

                                                
                                    
TestPause/serial/PauseAgain (1.05s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-527439 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-527439 --alsologtostderr -v=5: (1.049701158s)
--- PASS: TestPause/serial/PauseAgain (1.05s)

                                                
                                    
TestPause/serial/DeletePaused (1.24s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-527439 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-527439 --alsologtostderr -v=5: (1.243534709s)
--- PASS: TestPause/serial/DeletePaused (1.24s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.86s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (90.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-824763 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 13:36:49.053218 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-824763 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m30.117840769s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (61.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-181856 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-181856 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m1.539611721s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (12.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-824763 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fbe56745-3fc4-4a4b-ab98-ffb272586b12] Pending
helpers_test.go:344: "busybox" [fbe56745-3fc4-4a4b-ab98-ffb272586b12] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.003635444s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-824763 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-181856 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7a6efa12-3060-4fa2-8fa0-e055cabf9992] Pending
helpers_test.go:344: "busybox" [7a6efa12-3060-4fa2-8fa0-e055cabf9992] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7a6efa12-3060-4fa2-8fa0-e055cabf9992] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00482135s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-181856 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-824763 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-824763 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.024935646s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-824763 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (90.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-824763 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-824763 --alsologtostderr -v=3: (1m30.927098803s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-160581 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-160581 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m2.273691165s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-181856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-181856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011302523s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-181856 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-181856 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-181856 --alsologtostderr -v=3: (1m31.222264444s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-160581 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e121bd80-a9c3-4a5f-94d9-2af01faafbac] Pending
helpers_test.go:344: "busybox" [e121bd80-a9c3-4a5f-94d9-2af01faafbac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e121bd80-a9c3-4a5f-94d9-2af01faafbac] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004912083s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-160581 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-160581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-160581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.04587238s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-160581 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-160581 --alsologtostderr -v=3
E0414 13:39:57.146270 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/functional-760045/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-160581 --alsologtostderr -v=3: (1m31.375924439s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824763 -n no-preload-824763
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824763 -n no-preload-824763: exit status 7 (82.433847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-824763 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (346.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-824763 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-824763 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m46.444793455s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-824763 -n no-preload-824763
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (346.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-181856 -n embed-certs-181856
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-181856 -n embed-certs-181856: exit status 7 (82.750267ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-181856 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (349.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-181856 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-181856 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m48.85419253s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-181856 -n embed-certs-181856
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (349.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-160581 -n default-k8s-diff-port-160581
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-160581 -n default-k8s-diff-port-160581: exit status 7 (82.231897ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-160581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (340.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-160581 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 13:41:49.052756 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-160581 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m40.319299476s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-160581 -n default-k8s-diff-port-160581
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (340.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-966509 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-966509 --alsologtostderr -v=3: (3.323284041s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-966509 -n old-k8s-version-966509: exit status 7 (79.968949ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-966509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pz7cb" [aa78546b-dca8-439b-bfe2-9650aa653f1e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pz7cb" [aa78546b-dca8-439b-bfe2-9650aa653f1e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004767693s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zvrpw" [7c1059bf-726e-4cec-ba7e-a4f8edcb62ff] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zvrpw" [7c1059bf-726e-4cec-ba7e-a4f8edcb62ff] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005022678s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pz7cb" [aa78546b-dca8-439b-bfe2-9650aa653f1e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005708037s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-824763 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-824763 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-824763 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-824763 --alsologtostderr -v=1: (1.182987659s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-824763 -n no-preload-824763
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-824763 -n no-preload-824763: exit status 2 (299.830094ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-824763 -n no-preload-824763
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-824763 -n no-preload-824763: exit status 2 (292.675232ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-824763 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-824763 -n no-preload-824763
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-824763 -n no-preload-824763
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (55.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-127631 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-127631 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (55.013359838s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (55.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zvrpw" [7c1059bf-726e-4cec-ba7e-a4f8edcb62ff] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005808832s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-181856 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-181856 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-181856 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-181856 -n embed-certs-181856
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-181856 -n embed-certs-181856: exit status 2 (296.808402ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-181856 -n embed-certs-181856
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-181856 -n embed-certs-181856: exit status 2 (306.684893ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-181856 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-181856 -n embed-certs-181856
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-181856 -n embed-certs-181856
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (75.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0414 13:46:49.053414 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m15.802954103s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qqpff" [64ad70b4-ab1a-4a9c-a13e-10d435391cd0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qqpff" [64ad70b4-ab1a-4a9c-a13e-10d435391cd0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.243312785s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-127631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-127631 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.455232678s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-127631 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-127631 --alsologtostderr -v=3: (10.493726079s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qqpff" [64ad70b4-ab1a-4a9c-a13e-10d435391cd0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005158248s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-160581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-127631 -n newest-cni-127631
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-127631 -n newest-cni-127631: exit status 7 (82.476663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-127631 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-160581 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-127631 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-127631 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (37.891977186s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-127631 -n newest-cni-127631
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-160581 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-160581 --alsologtostderr -v=1: (1.02761042s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-160581 -n default-k8s-diff-port-160581
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-160581 -n default-k8s-diff-port-160581: exit status 2 (327.327636ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-160581 -n default-k8s-diff-port-160581
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-160581 -n default-k8s-diff-port-160581: exit status 2 (339.271589ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-160581 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-160581 -n default-k8s-diff-port-160581
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-160581 -n default-k8s-diff-port-160581
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (88.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m28.44804554s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.45s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-734713 "pgrep -a kubelet"
I0414 13:47:39.973977 1175746 config.go:182] Loaded profile config "auto-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (14.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-734713 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-p6njc" [c1bbfe41-54ae-42aa-9d2a-63d0c775d385] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-p6njc" [c1bbfe41-54ae-42aa-9d2a-63d0c775d385] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.003994329s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.36s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-734713 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-127631 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-127631 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-127631 -n newest-cni-127631
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-127631 -n newest-cni-127631: exit status 2 (278.099348ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-127631 -n newest-cni-127631
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-127631 -n newest-cni-127631: exit status 2 (302.773669ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-127631 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-127631 -n newest-cni-127631
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-127631 -n newest-cni-127631
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (97.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m37.619710421s)
--- PASS: TestNetworkPlugins/group/calico/Start (97.62s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (106.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0414 13:48:18.440129 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:18.446742 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:18.458476 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:18.480123 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:18.521884 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:18.603549 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:18.765967 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:19.088107 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:19.730166 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:21.012485 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:23.574262 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:28.696395 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:48:38.937844 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m46.50776455s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (106.51s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6kls9" [7d4f99d6-37c0-4685-bd58-3367edc329cc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005080823s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-734713 "pgrep -a kubelet"
I0414 13:48:57.296140 1175746 config.go:182] Loaded profile config "kindnet-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-734713 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6smnf" [7226b6d3-8aec-45b8-8174-30d5df2f0997] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 13:48:59.419866 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-6smnf" [7226b6d3-8aec-45b8-8174-30d5df2f0997] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004188726s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-734713 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)
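
Note: the kindnet DNS, Localhost and HairPin checks above each reduce to a single probe against the netcat deployment created from testdata/netcat-deployment.yaml. A minimal sketch of re-running the same probes by hand, assuming that deployment and its companion "netcat" service on port 8080 still exist in the kindnet-734713 context, would be:

	kubectl --context kindnet-734713 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context kindnet-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context kindnet-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

Each command exiting 0 corresponds to the PASS lines recorded for DNS, Localhost and HairPin respectively.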

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (65.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m5.694440671s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.69s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v6r6d" [18d980b3-0e50-4b89-ba0c-9a06d5907ccf] Running
E0414 13:49:38.305640 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:49:38.312172 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:49:38.323939 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:49:38.346093 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:49:38.387739 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:49:38.469704 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:49:38.631776 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:49:38.953621 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:49:39.595276 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:49:40.381678 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/no-preload-824763/client.crt: no such file or directory" logger="UnhandledError"
E0414 13:49:40.877114 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00471277s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-734713 "pgrep -a kubelet"
E0414 13:49:43.438477 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
I0414 13:49:43.557878 1175746 config.go:182] Loaded profile config "calico-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-734713 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-rhpgr" [0e988bd4-4270-45cc-bfae-6c0a8a446e64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0414 13:49:48.559882 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-rhpgr" [0e988bd4-4270-45cc-bfae-6c0a8a446e64] Running
E0414 13:49:52.135550 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/addons-809953/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004014503s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-734713 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-734713 "pgrep -a kubelet"
E0414 13:49:58.802105 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
I0414 13:49:58.840910 1175746 config.go:182] Loaded profile config "custom-flannel-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-734713 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-kk4jn" [bcfd851e-d166-4a16-91cd-abf0e0965a74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-kk4jn" [bcfd851e-d166-4a16-91cd-abf0e0965a74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.00419573s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-734713 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0414 13:50:19.284498 1175746 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20384-1167927/.minikube/profiles/default-k8s-diff-port-160581/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m3.795717962s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.80s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (71.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-734713 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m11.209967046s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-734713 "pgrep -a kubelet"
I0414 13:50:33.744930 1175746 config.go:182] Loaded profile config "enable-default-cni-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)
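
Note: the KubeletFlags checks in this group all follow the same pattern: SSH into the node and list the running kubelet process with its flags. A minimal sketch of reproducing the check by hand, assuming the enable-default-cni-734713 profile is still running, would be:

	out/minikube-linux-amd64 ssh -p enable-default-cni-734713 "pgrep -a kubelet"

The assertions made against the printed command line live in net_test.go:133 and are not shown in this log.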

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-734713 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-929ct" [743fcb54-f0d1-40fa-9712-55eb68f876ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-929ct" [743fcb54-f0d1-40fa-9712-55eb68f876ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.009004738s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-734713 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zgtd4" [d5898d08-c29c-474a-9f54-77cddce0d454] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004603662s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-734713 "pgrep -a kubelet"
I0414 13:51:24.168701 1175746 config.go:182] Loaded profile config "flannel-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-734713 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vrgqb" [7767771f-804f-4633-bcda-35b3dafe30b5] Pending
helpers_test.go:344: "netcat-5d86dc444-vrgqb" [7767771f-804f-4633-bcda-35b3dafe30b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vrgqb" [7767771f-804f-4633-bcda-35b3dafe30b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005203375s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-734713 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-734713 "pgrep -a kubelet"
I0414 13:51:42.877084 1175746 config.go:182] Loaded profile config "bridge-734713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-734713 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5jjjf" [d48566a8-e187-4385-93af-b8feadde56bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5jjjf" [d48566a8-e187-4385-93af-b8feadde56bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005038628s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-734713 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-734713 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (35/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-809953 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-198773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-198773
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-734713 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-734713" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-734713

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-734713"

                                                
                                                
----------------------- debugLogs end: kubenet-734713 [took: 3.350809013s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-734713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-734713
--- SKIP: TestNetworkPlugins/group/kubenet (3.52s)
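The repeated "Profile \"kubenet-734713\" not found" and "context was not found" lines above are expected for a skipped test: the kubenet-734713 profile was never started, so every minikube and kubectl probe in the debugLogs dump has nothing to query. A minimal, hypothetical Go sketch of that probe-and-record pattern (the runProbe helper and package layout are assumed for illustration, not the actual helpers_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runProbe runs one diagnostic command and prints its combined output
	// under a ">>> label:" header, mirroring the debugLogs format above.
	// Failures are tolerated: for a profile that was never started, the
	// output is simply the "Profile ... not found" hint seen in this report.
	func runProbe(label, name string, args ...string) {
		out, _ := exec.Command(name, args...).CombinedOutput()
		fmt.Printf(">>> %s:\n%s\n", label, out)
	}

	func main() {
		profile := "kubenet-734713" // the profile the skipped test would have used
		runProbe("host: crio daemon status", "minikube", "-p", profile, "ssh", "sudo systemctl status crio")
		runProbe("k8s: cms", "kubectl", "--context", profile, "get", "cm", "-A")
	}

Tolerating each command's failure is what lets the collector print the "To start a cluster, run: ..." hint verbatim instead of aborting the dump.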

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-734713 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-734713" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-734713

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-734713" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-734713"

                                                
                                                
----------------------- debugLogs end: cilium-734713 [took: 3.894820357s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-734713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-734713
--- SKIP: TestNetworkPlugins/group/cilium (4.06s)
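The skip itself, reported at net_test.go:102 ("Skipping the test as it's interfering with other tests and is outdated"), uses the standard Go testing skip mechanism. A minimal, hypothetical sketch of such a guard (function and package names assumed for illustration; the actual net_test.go is organized differently):

	package minikubetest // hypothetical package name

	import "testing"

	// Hypothetical sketch of the skip guard reported at net_test.go:102.
	func TestNetworkPluginsCilium(t *testing.T) {
		t.Skip("Skipping the test as it's interfering with other tests and is outdated")
		// Nothing below runs; the surrounding harness still collects the
		// debugLogs above for the profile name it would have used.
	}

Once t.Skip fires, the test body never executes, which is why the subsequent cleanup only has the never-started cilium-734713 profile to delete.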

                                                
                                    