Test Report: KVM_Linux_crio 20591

36ed4f4062413474f7b114ebc11d0835e79e9d46:2025-04-03:38987

Test fail (10/321)

TestAddons/parallel/Ingress (182.93s)

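This failure is the ingress connectivity check timing out: the curl for http://127.0.0.1/ with Host: nginx.example.com, run inside the VM over minikube ssh, exited with status 1 after about 2m10s (the remote command reported status 28, which matches curl's operation-timed-out exit code), even though the ingress-nginx controller and the test nginx pod were ready beforehand. A minimal sketch for re-running the failing check by hand, built only from the profile name, binary path, and commands that appear in the log below (it assumes the addons-445082 profile from this run is still up):

    # wait for the ingress-nginx controller, then recreate the test ingress and backing pod/service
    kubectl --context addons-445082 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
    kubectl --context addons-445082 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-445082 replace --force -f testdata/nginx-pod-svc.yaml
    # the step that failed in this run: curl the ingress from inside the VM
    out/minikube-linux-amd64 -p addons-445082 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
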
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-445082 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-445082 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-445082 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [70b18ed9-c3b9-4c7b-83b1-fc83571346b9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [70b18ed9-c3b9-4c7b-83b1-fc83571346b9] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 41.003914395s
I0403 18:16:03.842685   21552 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-445082 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.236317274s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-445082 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.130
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-445082 -n addons-445082
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-445082 logs -n 25: (1.143951639s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-304015                                                                     | download-only-304015 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC | 03 Apr 25 18:12 UTC |
	| delete  | -p download-only-286102                                                                     | download-only-286102 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC | 03 Apr 25 18:12 UTC |
	| delete  | -p download-only-304015                                                                     | download-only-304015 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC | 03 Apr 25 18:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-586392 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC |                     |
	|         | binary-mirror-586392                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42171                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-586392                                                                     | binary-mirror-586392 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC | 03 Apr 25 18:12 UTC |
	| addons  | disable dashboard -p                                                                        | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC |                     |
	|         | addons-445082                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC |                     |
	|         | addons-445082                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-445082 --wait=true                                                                | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC | 03 Apr 25 18:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-445082 addons disable                                                                | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:14 UTC | 03 Apr 25 18:14 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-445082 addons disable                                                                | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:14 UTC | 03 Apr 25 18:14 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:14 UTC | 03 Apr 25 18:14 UTC |
	|         | -p addons-445082                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-445082 addons disable                                                                | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:14 UTC | 03 Apr 25 18:15 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-445082 addons                                                                        | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:14 UTC | 03 Apr 25 18:14 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-445082 addons                                                                        | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:15 UTC | 03 Apr 25 18:15 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-445082 addons                                                                        | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:15 UTC | 03 Apr 25 18:15 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-445082 ip                                                                            | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:15 UTC | 03 Apr 25 18:15 UTC |
	| addons  | addons-445082 addons disable                                                                | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:15 UTC | 03 Apr 25 18:15 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-445082 addons disable                                                                | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:15 UTC | 03 Apr 25 18:15 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-445082 addons                                                                        | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:15 UTC | 03 Apr 25 18:15 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-445082 ssh curl -s                                                                   | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:16 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-445082 ssh cat                                                                       | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:16 UTC | 03 Apr 25 18:16 UTC |
	|         | /opt/local-path-provisioner/pvc-c189de65-8aca-4e94-9ce0-37185dfffce6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-445082 addons disable                                                                | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:16 UTC | 03 Apr 25 18:16 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-445082 addons                                                                        | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:16 UTC | 03 Apr 25 18:16 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-445082 addons                                                                        | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:16 UTC | 03 Apr 25 18:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-445082 ip                                                                            | addons-445082        | jenkins | v1.35.0 | 03 Apr 25 18:18 UTC | 03 Apr 25 18:18 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 18:12:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 18:12:19.317342   22244 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:12:19.317568   22244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:12:19.317576   22244 out.go:358] Setting ErrFile to fd 2...
	I0403 18:12:19.317580   22244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:12:19.317740   22244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 18:12:19.318341   22244 out.go:352] Setting JSON to false
	I0403 18:12:19.319178   22244 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3284,"bootTime":1743700655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 18:12:19.319272   22244 start.go:139] virtualization: kvm guest
	I0403 18:12:19.320922   22244 out.go:177] * [addons-445082] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 18:12:19.322003   22244 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 18:12:19.322035   22244 notify.go:220] Checking for updates...
	I0403 18:12:19.324139   22244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 18:12:19.325167   22244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 18:12:19.326519   22244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 18:12:19.327817   22244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 18:12:19.329055   22244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 18:12:19.330244   22244 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 18:12:19.361307   22244 out.go:177] * Using the kvm2 driver based on user configuration
	I0403 18:12:19.362199   22244 start.go:297] selected driver: kvm2
	I0403 18:12:19.362211   22244 start.go:901] validating driver "kvm2" against <nil>
	I0403 18:12:19.362222   22244 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 18:12:19.362955   22244 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 18:12:19.363040   22244 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 18:12:19.377571   22244 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0403 18:12:19.377613   22244 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 18:12:19.377864   22244 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 18:12:19.377895   22244 cni.go:84] Creating CNI manager for ""
	I0403 18:12:19.377945   22244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 18:12:19.377956   22244 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 18:12:19.378049   22244 start.go:340] cluster config:
	{Name:addons-445082 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-445082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 18:12:19.378162   22244 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 18:12:19.379601   22244 out.go:177] * Starting "addons-445082" primary control-plane node in "addons-445082" cluster
	I0403 18:12:19.380446   22244 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 18:12:19.380478   22244 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0403 18:12:19.380485   22244 cache.go:56] Caching tarball of preloaded images
	I0403 18:12:19.380557   22244 preload.go:172] Found /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0403 18:12:19.380568   22244 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0403 18:12:19.380855   22244 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/config.json ...
	I0403 18:12:19.380876   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/config.json: {Name:mk87d2214103ac6375c69865e5831a09de0acf0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:19.381002   22244 start.go:360] acquireMachinesLock for addons-445082: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 18:12:19.381049   22244 start.go:364] duration metric: took 33.913µs to acquireMachinesLock for "addons-445082"
	I0403 18:12:19.381066   22244 start.go:93] Provisioning new machine with config: &{Name:addons-445082 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-445082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 18:12:19.381112   22244 start.go:125] createHost starting for "" (driver="kvm2")
	I0403 18:12:19.383280   22244 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0403 18:12:19.383411   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:12:19.383449   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:12:19.397304   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
	I0403 18:12:19.397719   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:12:19.398282   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:12:19.398306   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:12:19.398601   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:12:19.398781   22244 main.go:141] libmachine: (addons-445082) Calling .GetMachineName
	I0403 18:12:19.398923   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:12:19.399036   22244 start.go:159] libmachine.API.Create for "addons-445082" (driver="kvm2")
	I0403 18:12:19.399066   22244 client.go:168] LocalClient.Create starting
	I0403 18:12:19.399108   22244 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem
	I0403 18:12:19.490615   22244 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem
	I0403 18:12:20.435728   22244 main.go:141] libmachine: Running pre-create checks...
	I0403 18:12:20.435765   22244 main.go:141] libmachine: (addons-445082) Calling .PreCreateCheck
	I0403 18:12:20.436282   22244 main.go:141] libmachine: (addons-445082) Calling .GetConfigRaw
	I0403 18:12:20.436738   22244 main.go:141] libmachine: Creating machine...
	I0403 18:12:20.436754   22244 main.go:141] libmachine: (addons-445082) Calling .Create
	I0403 18:12:20.436913   22244 main.go:141] libmachine: (addons-445082) creating KVM machine...
	I0403 18:12:20.436930   22244 main.go:141] libmachine: (addons-445082) creating network...
	I0403 18:12:20.438234   22244 main.go:141] libmachine: (addons-445082) DBG | found existing default KVM network
	I0403 18:12:20.438929   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:20.438764   22266 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112dd0}
	I0403 18:12:20.438964   22244 main.go:141] libmachine: (addons-445082) DBG | created network xml: 
	I0403 18:12:20.438977   22244 main.go:141] libmachine: (addons-445082) DBG | <network>
	I0403 18:12:20.438990   22244 main.go:141] libmachine: (addons-445082) DBG |   <name>mk-addons-445082</name>
	I0403 18:12:20.439000   22244 main.go:141] libmachine: (addons-445082) DBG |   <dns enable='no'/>
	I0403 18:12:20.439010   22244 main.go:141] libmachine: (addons-445082) DBG |   
	I0403 18:12:20.439020   22244 main.go:141] libmachine: (addons-445082) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0403 18:12:20.439027   22244 main.go:141] libmachine: (addons-445082) DBG |     <dhcp>
	I0403 18:12:20.439038   22244 main.go:141] libmachine: (addons-445082) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0403 18:12:20.439044   22244 main.go:141] libmachine: (addons-445082) DBG |     </dhcp>
	I0403 18:12:20.439053   22244 main.go:141] libmachine: (addons-445082) DBG |   </ip>
	I0403 18:12:20.439063   22244 main.go:141] libmachine: (addons-445082) DBG |   
	I0403 18:12:20.439072   22244 main.go:141] libmachine: (addons-445082) DBG | </network>
	I0403 18:12:20.439085   22244 main.go:141] libmachine: (addons-445082) DBG | 
	I0403 18:12:20.480078   22244 main.go:141] libmachine: (addons-445082) DBG | trying to create private KVM network mk-addons-445082 192.168.39.0/24...
	I0403 18:12:20.545072   22244 main.go:141] libmachine: (addons-445082) DBG | private KVM network mk-addons-445082 192.168.39.0/24 created
	I0403 18:12:20.545098   22244 main.go:141] libmachine: (addons-445082) setting up store path in /home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082 ...
	I0403 18:12:20.545107   22244 main.go:141] libmachine: (addons-445082) building disk image from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0403 18:12:20.545114   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:20.545045   22266 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 18:12:20.545304   22244 main.go:141] libmachine: (addons-445082) Downloading /home/jenkins/minikube-integration/20591-14371/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0403 18:12:20.817823   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:20.817688   22266 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa...
	I0403 18:12:21.128646   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:21.128478   22266 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/addons-445082.rawdisk...
	I0403 18:12:21.128683   22244 main.go:141] libmachine: (addons-445082) DBG | Writing magic tar header
	I0403 18:12:21.128706   22244 main.go:141] libmachine: (addons-445082) DBG | Writing SSH key tar header
	I0403 18:12:21.128717   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:21.128627   22266 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082 ...
	I0403 18:12:21.128740   22244 main.go:141] libmachine: (addons-445082) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082
	I0403 18:12:21.128768   22244 main.go:141] libmachine: (addons-445082) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines
	I0403 18:12:21.128790   22244 main.go:141] libmachine: (addons-445082) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082 (perms=drwx------)
	I0403 18:12:21.128801   22244 main.go:141] libmachine: (addons-445082) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines (perms=drwxr-xr-x)
	I0403 18:12:21.128813   22244 main.go:141] libmachine: (addons-445082) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 18:12:21.128825   22244 main.go:141] libmachine: (addons-445082) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube (perms=drwxr-xr-x)
	I0403 18:12:21.128837   22244 main.go:141] libmachine: (addons-445082) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371
	I0403 18:12:21.128850   22244 main.go:141] libmachine: (addons-445082) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0403 18:12:21.128860   22244 main.go:141] libmachine: (addons-445082) DBG | checking permissions on dir: /home/jenkins
	I0403 18:12:21.128871   22244 main.go:141] libmachine: (addons-445082) DBG | checking permissions on dir: /home
	I0403 18:12:21.128878   22244 main.go:141] libmachine: (addons-445082) DBG | skipping /home - not owner
	I0403 18:12:21.128910   22244 main.go:141] libmachine: (addons-445082) setting executable bit set on /home/jenkins/minikube-integration/20591-14371 (perms=drwxrwxr-x)
	I0403 18:12:21.128940   22244 main.go:141] libmachine: (addons-445082) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0403 18:12:21.128954   22244 main.go:141] libmachine: (addons-445082) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0403 18:12:21.128965   22244 main.go:141] libmachine: (addons-445082) creating domain...
	I0403 18:12:21.129931   22244 main.go:141] libmachine: (addons-445082) define libvirt domain using xml: 
	I0403 18:12:21.129948   22244 main.go:141] libmachine: (addons-445082) <domain type='kvm'>
	I0403 18:12:21.129957   22244 main.go:141] libmachine: (addons-445082)   <name>addons-445082</name>
	I0403 18:12:21.129964   22244 main.go:141] libmachine: (addons-445082)   <memory unit='MiB'>4000</memory>
	I0403 18:12:21.129970   22244 main.go:141] libmachine: (addons-445082)   <vcpu>2</vcpu>
	I0403 18:12:21.129974   22244 main.go:141] libmachine: (addons-445082)   <features>
	I0403 18:12:21.130001   22244 main.go:141] libmachine: (addons-445082)     <acpi/>
	I0403 18:12:21.130005   22244 main.go:141] libmachine: (addons-445082)     <apic/>
	I0403 18:12:21.130010   22244 main.go:141] libmachine: (addons-445082)     <pae/>
	I0403 18:12:21.130013   22244 main.go:141] libmachine: (addons-445082)     
	I0403 18:12:21.130018   22244 main.go:141] libmachine: (addons-445082)   </features>
	I0403 18:12:21.130022   22244 main.go:141] libmachine: (addons-445082)   <cpu mode='host-passthrough'>
	I0403 18:12:21.130049   22244 main.go:141] libmachine: (addons-445082)   
	I0403 18:12:21.130062   22244 main.go:141] libmachine: (addons-445082)   </cpu>
	I0403 18:12:21.130068   22244 main.go:141] libmachine: (addons-445082)   <os>
	I0403 18:12:21.130075   22244 main.go:141] libmachine: (addons-445082)     <type>hvm</type>
	I0403 18:12:21.130083   22244 main.go:141] libmachine: (addons-445082)     <boot dev='cdrom'/>
	I0403 18:12:21.130087   22244 main.go:141] libmachine: (addons-445082)     <boot dev='hd'/>
	I0403 18:12:21.130091   22244 main.go:141] libmachine: (addons-445082)     <bootmenu enable='no'/>
	I0403 18:12:21.130095   22244 main.go:141] libmachine: (addons-445082)   </os>
	I0403 18:12:21.130100   22244 main.go:141] libmachine: (addons-445082)   <devices>
	I0403 18:12:21.130107   22244 main.go:141] libmachine: (addons-445082)     <disk type='file' device='cdrom'>
	I0403 18:12:21.130115   22244 main.go:141] libmachine: (addons-445082)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/boot2docker.iso'/>
	I0403 18:12:21.130129   22244 main.go:141] libmachine: (addons-445082)       <target dev='hdc' bus='scsi'/>
	I0403 18:12:21.130150   22244 main.go:141] libmachine: (addons-445082)       <readonly/>
	I0403 18:12:21.130156   22244 main.go:141] libmachine: (addons-445082)     </disk>
	I0403 18:12:21.130181   22244 main.go:141] libmachine: (addons-445082)     <disk type='file' device='disk'>
	I0403 18:12:21.130204   22244 main.go:141] libmachine: (addons-445082)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0403 18:12:21.130220   22244 main.go:141] libmachine: (addons-445082)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/addons-445082.rawdisk'/>
	I0403 18:12:21.130231   22244 main.go:141] libmachine: (addons-445082)       <target dev='hda' bus='virtio'/>
	I0403 18:12:21.130244   22244 main.go:141] libmachine: (addons-445082)     </disk>
	I0403 18:12:21.130255   22244 main.go:141] libmachine: (addons-445082)     <interface type='network'>
	I0403 18:12:21.130279   22244 main.go:141] libmachine: (addons-445082)       <source network='mk-addons-445082'/>
	I0403 18:12:21.130292   22244 main.go:141] libmachine: (addons-445082)       <model type='virtio'/>
	I0403 18:12:21.130320   22244 main.go:141] libmachine: (addons-445082)     </interface>
	I0403 18:12:21.130343   22244 main.go:141] libmachine: (addons-445082)     <interface type='network'>
	I0403 18:12:21.130357   22244 main.go:141] libmachine: (addons-445082)       <source network='default'/>
	I0403 18:12:21.130369   22244 main.go:141] libmachine: (addons-445082)       <model type='virtio'/>
	I0403 18:12:21.130374   22244 main.go:141] libmachine: (addons-445082)     </interface>
	I0403 18:12:21.130381   22244 main.go:141] libmachine: (addons-445082)     <serial type='pty'>
	I0403 18:12:21.130385   22244 main.go:141] libmachine: (addons-445082)       <target port='0'/>
	I0403 18:12:21.130391   22244 main.go:141] libmachine: (addons-445082)     </serial>
	I0403 18:12:21.130396   22244 main.go:141] libmachine: (addons-445082)     <console type='pty'>
	I0403 18:12:21.130402   22244 main.go:141] libmachine: (addons-445082)       <target type='serial' port='0'/>
	I0403 18:12:21.130407   22244 main.go:141] libmachine: (addons-445082)     </console>
	I0403 18:12:21.130413   22244 main.go:141] libmachine: (addons-445082)     <rng model='virtio'>
	I0403 18:12:21.130419   22244 main.go:141] libmachine: (addons-445082)       <backend model='random'>/dev/random</backend>
	I0403 18:12:21.130425   22244 main.go:141] libmachine: (addons-445082)     </rng>
	I0403 18:12:21.130430   22244 main.go:141] libmachine: (addons-445082)     
	I0403 18:12:21.130442   22244 main.go:141] libmachine: (addons-445082)     
	I0403 18:12:21.130454   22244 main.go:141] libmachine: (addons-445082)   </devices>
	I0403 18:12:21.130468   22244 main.go:141] libmachine: (addons-445082) </domain>
	I0403 18:12:21.130485   22244 main.go:141] libmachine: (addons-445082) 
	I0403 18:12:21.157858   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:01:11:5a in network default
	I0403 18:12:21.158393   22244 main.go:141] libmachine: (addons-445082) starting domain...
	I0403 18:12:21.158415   22244 main.go:141] libmachine: (addons-445082) ensuring networks are active...
	I0403 18:12:21.158426   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:21.159029   22244 main.go:141] libmachine: (addons-445082) Ensuring network default is active
	I0403 18:12:21.159384   22244 main.go:141] libmachine: (addons-445082) Ensuring network mk-addons-445082 is active
	I0403 18:12:21.159935   22244 main.go:141] libmachine: (addons-445082) getting domain XML...
	I0403 18:12:21.160718   22244 main.go:141] libmachine: (addons-445082) creating domain...
	I0403 18:12:22.598847   22244 main.go:141] libmachine: (addons-445082) waiting for IP...
	I0403 18:12:22.599591   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:22.599938   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:22.599968   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:22.599927   22266 retry.go:31] will retry after 304.876692ms: waiting for domain to come up
	I0403 18:12:22.906410   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:22.906985   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:22.907011   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:22.906948   22266 retry.go:31] will retry after 277.414734ms: waiting for domain to come up
	I0403 18:12:23.186421   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:23.186883   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:23.186988   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:23.186847   22266 retry.go:31] will retry after 406.823558ms: waiting for domain to come up
	I0403 18:12:23.595489   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:23.595882   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:23.595909   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:23.595856   22266 retry.go:31] will retry after 426.186972ms: waiting for domain to come up
	I0403 18:12:24.023437   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:24.023894   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:24.023939   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:24.023874   22266 retry.go:31] will retry after 490.282914ms: waiting for domain to come up
	I0403 18:12:24.515955   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:24.516304   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:24.516325   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:24.516274   22266 retry.go:31] will retry after 897.800662ms: waiting for domain to come up
	I0403 18:12:25.415304   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:25.415695   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:25.415716   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:25.415667   22266 retry.go:31] will retry after 985.855288ms: waiting for domain to come up
	I0403 18:12:26.402746   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:26.403203   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:26.403228   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:26.403130   22266 retry.go:31] will retry after 1.089578208s: waiting for domain to come up
	I0403 18:12:27.494531   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:27.494870   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:27.494896   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:27.494856   22266 retry.go:31] will retry after 1.140693188s: waiting for domain to come up
	I0403 18:12:28.637053   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:28.637443   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:28.637482   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:28.637425   22266 retry.go:31] will retry after 2.03093909s: waiting for domain to come up
	I0403 18:12:30.670137   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:30.670677   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:30.670715   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:30.670616   22266 retry.go:31] will retry after 2.33170255s: waiting for domain to come up
	I0403 18:12:33.003676   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:33.004070   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:33.004119   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:33.004051   22266 retry.go:31] will retry after 2.897343896s: waiting for domain to come up
	I0403 18:12:35.903573   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:35.904018   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:35.904038   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:35.903984   22266 retry.go:31] will retry after 3.62823283s: waiting for domain to come up
	I0403 18:12:39.534118   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:39.534644   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find current IP address of domain addons-445082 in network mk-addons-445082
	I0403 18:12:39.534674   22244 main.go:141] libmachine: (addons-445082) DBG | I0403 18:12:39.534603   22266 retry.go:31] will retry after 4.536108234s: waiting for domain to come up
	I0403 18:12:44.077170   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.077650   22244 main.go:141] libmachine: (addons-445082) found domain IP: 192.168.39.130
	I0403 18:12:44.077675   22244 main.go:141] libmachine: (addons-445082) reserving static IP address...
	I0403 18:12:44.077688   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has current primary IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.078017   22244 main.go:141] libmachine: (addons-445082) DBG | unable to find host DHCP lease matching {name: "addons-445082", mac: "52:54:00:e7:df:ce", ip: "192.168.39.130"} in network mk-addons-445082
	I0403 18:12:44.151529   22244 main.go:141] libmachine: (addons-445082) DBG | Getting to WaitForSSH function...
	I0403 18:12:44.151560   22244 main.go:141] libmachine: (addons-445082) reserved static IP address 192.168.39.130 for domain addons-445082
	I0403 18:12:44.151573   22244 main.go:141] libmachine: (addons-445082) waiting for SSH...
	I0403 18:12:44.154115   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.154441   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:44.154472   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.154628   22244 main.go:141] libmachine: (addons-445082) DBG | Using SSH client type: external
	I0403 18:12:44.154647   22244 main.go:141] libmachine: (addons-445082) DBG | Using SSH private key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa (-rw-------)
	I0403 18:12:44.154662   22244 main.go:141] libmachine: (addons-445082) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0403 18:12:44.154682   22244 main.go:141] libmachine: (addons-445082) DBG | About to run SSH command:
	I0403 18:12:44.154687   22244 main.go:141] libmachine: (addons-445082) DBG | exit 0
	I0403 18:12:44.282649   22244 main.go:141] libmachine: (addons-445082) DBG | SSH cmd err, output: <nil>: 
	I0403 18:12:44.282963   22244 main.go:141] libmachine: (addons-445082) KVM machine creation complete
	I0403 18:12:44.283228   22244 main.go:141] libmachine: (addons-445082) Calling .GetConfigRaw
	I0403 18:12:44.283780   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:12:44.283955   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:12:44.284123   22244 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0403 18:12:44.284142   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:12:44.285239   22244 main.go:141] libmachine: Detecting operating system of created instance...
	I0403 18:12:44.285255   22244 main.go:141] libmachine: Waiting for SSH to be available...
	I0403 18:12:44.285262   22244 main.go:141] libmachine: Getting to WaitForSSH function...
	I0403 18:12:44.285270   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:44.287423   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.287746   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:44.287773   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.287911   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:12:44.288071   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.288219   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.288356   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:12:44.288495   22244 main.go:141] libmachine: Using SSH client type: native
	I0403 18:12:44.288741   22244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0403 18:12:44.288752   22244 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0403 18:12:44.385996   22244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 18:12:44.386027   22244 main.go:141] libmachine: Detecting the provisioner...
	I0403 18:12:44.386035   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:44.388668   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.388956   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:44.388982   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.389124   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:12:44.389304   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.389441   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.389560   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:12:44.389688   22244 main.go:141] libmachine: Using SSH client type: native
	I0403 18:12:44.389885   22244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0403 18:12:44.389898   22244 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0403 18:12:44.487237   22244 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0403 18:12:44.487324   22244 main.go:141] libmachine: found compatible host: buildroot
	I0403 18:12:44.487336   22244 main.go:141] libmachine: Provisioning with buildroot...
	I0403 18:12:44.487347   22244 main.go:141] libmachine: (addons-445082) Calling .GetMachineName
	I0403 18:12:44.487613   22244 buildroot.go:166] provisioning hostname "addons-445082"
	I0403 18:12:44.487641   22244 main.go:141] libmachine: (addons-445082) Calling .GetMachineName
	I0403 18:12:44.487818   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:44.490443   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.490742   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:44.490769   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.490888   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:12:44.491051   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.491164   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.491272   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:12:44.491395   22244 main.go:141] libmachine: Using SSH client type: native
	I0403 18:12:44.491661   22244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0403 18:12:44.491675   22244 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-445082 && echo "addons-445082" | sudo tee /etc/hostname
	I0403 18:12:44.604219   22244 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-445082
	
	I0403 18:12:44.604249   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:44.606724   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.607059   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:44.607086   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.607258   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:12:44.607433   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.607556   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.607686   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:12:44.607860   22244 main.go:141] libmachine: Using SSH client type: native
	I0403 18:12:44.608054   22244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0403 18:12:44.608072   22244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-445082' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-445082/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-445082' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0403 18:12:44.714910   22244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 18:12:44.714941   22244 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20591-14371/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-14371/.minikube}
	I0403 18:12:44.714963   22244 buildroot.go:174] setting up certificates
	I0403 18:12:44.714977   22244 provision.go:84] configureAuth start
	I0403 18:12:44.714989   22244 main.go:141] libmachine: (addons-445082) Calling .GetMachineName
	I0403 18:12:44.715287   22244 main.go:141] libmachine: (addons-445082) Calling .GetIP
	I0403 18:12:44.717635   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.717923   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:44.717938   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.718081   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:44.720137   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.720417   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:44.720433   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.720626   22244 provision.go:143] copyHostCerts
	I0403 18:12:44.720695   22244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem (1123 bytes)
	I0403 18:12:44.720816   22244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem (1675 bytes)
	I0403 18:12:44.720903   22244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem (1082 bytes)
	I0403 18:12:44.720969   22244 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem org=jenkins.addons-445082 san=[127.0.0.1 192.168.39.130 addons-445082 localhost minikube]
	I0403 18:12:44.836708   22244 provision.go:177] copyRemoteCerts
	I0403 18:12:44.836766   22244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0403 18:12:44.836787   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:44.839207   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.839521   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:44.839537   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.839696   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:12:44.839864   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.839999   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:12:44.840119   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:12:44.916312   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0403 18:12:44.938094   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0403 18:12:44.959387   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0403 18:12:44.981186   22244 provision.go:87] duration metric: took 266.198321ms to configureAuth
	I0403 18:12:44.981214   22244 buildroot.go:189] setting minikube options for container-runtime
	I0403 18:12:44.981367   22244 config.go:182] Loaded profile config "addons-445082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 18:12:44.981433   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:44.984279   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.984607   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:44.984637   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:44.984808   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:12:44.984985   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.985143   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:44.985329   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:12:44.985476   22244 main.go:141] libmachine: Using SSH client type: native
	I0403 18:12:44.985708   22244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0403 18:12:44.985728   22244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0403 18:12:45.199529   22244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0403 18:12:45.199565   22244 main.go:141] libmachine: Checking connection to Docker...
	I0403 18:12:45.199573   22244 main.go:141] libmachine: (addons-445082) Calling .GetURL
	I0403 18:12:45.200718   22244 main.go:141] libmachine: (addons-445082) DBG | using libvirt version 6000000
	I0403 18:12:45.202870   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.203153   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:45.203173   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.203341   22244 main.go:141] libmachine: Docker is up and running!
	I0403 18:12:45.203353   22244 main.go:141] libmachine: Reticulating splines...
	I0403 18:12:45.203359   22244 client.go:171] duration metric: took 25.80428383s to LocalClient.Create
	I0403 18:12:45.203381   22244 start.go:167] duration metric: took 25.804343411s to libmachine.API.Create "addons-445082"
	I0403 18:12:45.203393   22244 start.go:293] postStartSetup for "addons-445082" (driver="kvm2")
	I0403 18:12:45.203405   22244 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0403 18:12:45.203429   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:12:45.203639   22244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0403 18:12:45.203656   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:45.205507   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.205810   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:45.205838   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.205988   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:12:45.206144   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:45.206281   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:12:45.206403   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:12:45.285151   22244 ssh_runner.go:195] Run: cat /etc/os-release
	I0403 18:12:45.289488   22244 info.go:137] Remote host: Buildroot 2023.02.9
	I0403 18:12:45.289515   22244 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/addons for local assets ...
	I0403 18:12:45.289580   22244 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/files for local assets ...
	I0403 18:12:45.289602   22244 start.go:296] duration metric: took 86.202745ms for postStartSetup
	I0403 18:12:45.289630   22244 main.go:141] libmachine: (addons-445082) Calling .GetConfigRaw
	I0403 18:12:45.290143   22244 main.go:141] libmachine: (addons-445082) Calling .GetIP
	I0403 18:12:45.292575   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.292956   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:45.293015   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.293279   22244 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/config.json ...
	I0403 18:12:45.293501   22244 start.go:128] duration metric: took 25.91237859s to createHost
	I0403 18:12:45.293526   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:45.295688   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.296014   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:45.296039   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.296149   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:12:45.296311   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:45.296451   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:45.296554   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:12:45.296692   22244 main.go:141] libmachine: Using SSH client type: native
	I0403 18:12:45.296877   22244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.130 22 <nil> <nil>}
	I0403 18:12:45.296887   22244 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0403 18:12:45.395109   22244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743703965.370470551
	
	I0403 18:12:45.395138   22244 fix.go:216] guest clock: 1743703965.370470551
	I0403 18:12:45.395149   22244 fix.go:229] Guest: 2025-04-03 18:12:45.370470551 +0000 UTC Remote: 2025-04-03 18:12:45.293515146 +0000 UTC m=+26.010037910 (delta=76.955405ms)
	I0403 18:12:45.395180   22244 fix.go:200] guest clock delta is within tolerance: 76.955405ms
	I0403 18:12:45.395187   22244 start.go:83] releasing machines lock for "addons-445082", held for 26.014128669s
	I0403 18:12:45.395225   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:12:45.395479   22244 main.go:141] libmachine: (addons-445082) Calling .GetIP
	I0403 18:12:45.397903   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.398242   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:45.398280   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.398504   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:12:45.398936   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:12:45.399104   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:12:45.399202   22244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0403 18:12:45.399237   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:45.399332   22244 ssh_runner.go:195] Run: cat /version.json
	I0403 18:12:45.399358   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:12:45.401819   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.402093   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.402125   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:45.402149   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.402268   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:12:45.402432   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:45.402528   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:45.402558   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:45.402569   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:12:45.402681   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:12:45.402725   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:12:45.402859   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:12:45.402986   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:12:45.403100   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:12:45.510899   22244 ssh_runner.go:195] Run: systemctl --version
	I0403 18:12:45.516594   22244 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0403 18:12:45.670784   22244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0403 18:12:45.676355   22244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0403 18:12:45.676411   22244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0403 18:12:45.691740   22244 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0403 18:12:45.691769   22244 start.go:495] detecting cgroup driver to use...
	I0403 18:12:45.691831   22244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0403 18:12:45.707055   22244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0403 18:12:45.720261   22244 docker.go:217] disabling cri-docker service (if available) ...
	I0403 18:12:45.720311   22244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0403 18:12:45.732980   22244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0403 18:12:45.745710   22244 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0403 18:12:45.853273   22244 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0403 18:12:46.017782   22244 docker.go:233] disabling docker service ...
	I0403 18:12:46.017839   22244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0403 18:12:46.034682   22244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0403 18:12:46.047352   22244 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0403 18:12:46.169056   22244 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0403 18:12:46.285855   22244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0403 18:12:46.299777   22244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0403 18:12:46.317251   22244 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0403 18:12:46.317315   22244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 18:12:46.327626   22244 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0403 18:12:46.327696   22244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 18:12:46.338214   22244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 18:12:46.348792   22244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 18:12:46.359282   22244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0403 18:12:46.369408   22244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 18:12:46.379389   22244 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 18:12:46.395431   22244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 18:12:46.405435   22244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0403 18:12:46.414567   22244 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0403 18:12:46.414629   22244 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0403 18:12:46.427347   22244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0403 18:12:46.436578   22244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 18:12:46.547263   22244 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0403 18:12:46.632235   22244 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0403 18:12:46.632333   22244 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0403 18:12:46.638605   22244 start.go:563] Will wait 60s for crictl version
	I0403 18:12:46.638682   22244 ssh_runner.go:195] Run: which crictl
	I0403 18:12:46.642219   22244 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0403 18:12:46.676173   22244 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0403 18:12:46.676288   22244 ssh_runner.go:195] Run: crio --version
	I0403 18:12:46.703511   22244 ssh_runner.go:195] Run: crio --version
	I0403 18:12:46.730748   22244 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0403 18:12:46.731953   22244 main.go:141] libmachine: (addons-445082) Calling .GetIP
	I0403 18:12:46.734543   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:46.734891   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:12:46.734923   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:12:46.735093   22244 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0403 18:12:46.739091   22244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 18:12:46.751089   22244 kubeadm.go:883] updating cluster {Name:addons-445082 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-445082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0403 18:12:46.751192   22244 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 18:12:46.751234   22244 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 18:12:46.782047   22244 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0403 18:12:46.782109   22244 ssh_runner.go:195] Run: which lz4
	I0403 18:12:46.785834   22244 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0403 18:12:46.789538   22244 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0403 18:12:46.789564   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0403 18:12:47.963439   22244 crio.go:462] duration metric: took 1.177640755s to copy over tarball
	I0403 18:12:47.963520   22244 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0403 18:12:50.112182   22244 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.148630516s)
	I0403 18:12:50.112225   22244 crio.go:469] duration metric: took 2.148759683s to extract the tarball
	I0403 18:12:50.112235   22244 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0403 18:12:50.149007   22244 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 18:12:50.189804   22244 crio.go:514] all images are preloaded for cri-o runtime.
	I0403 18:12:50.189825   22244 cache_images.go:84] Images are preloaded, skipping loading
	I0403 18:12:50.189832   22244 kubeadm.go:934] updating node { 192.168.39.130 8443 v1.32.2 crio true true} ...
	I0403 18:12:50.189921   22244 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-445082 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-445082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0403 18:12:50.189981   22244 ssh_runner.go:195] Run: crio config
	I0403 18:12:50.235232   22244 cni.go:84] Creating CNI manager for ""
	I0403 18:12:50.235255   22244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 18:12:50.235265   22244 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0403 18:12:50.235285   22244 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.130 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-445082 NodeName:addons-445082 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0403 18:12:50.235391   22244 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-445082"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.130"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.130"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0403 18:12:50.235446   22244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0403 18:12:50.244598   22244 binaries.go:44] Found k8s binaries, skipping transfer
	I0403 18:12:50.244671   22244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0403 18:12:50.253503   22244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0403 18:12:50.268658   22244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0403 18:12:50.283292   22244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0403 18:12:50.298008   22244 ssh_runner.go:195] Run: grep 192.168.39.130	control-plane.minikube.internal$ /etc/hosts
	I0403 18:12:50.301452   22244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 18:12:50.312162   22244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 18:12:50.436478   22244 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 18:12:50.451578   22244 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082 for IP: 192.168.39.130
	I0403 18:12:50.451610   22244 certs.go:194] generating shared ca certs ...
	I0403 18:12:50.451629   22244 certs.go:226] acquiring lock for ca certs: {Name:mk34f09724355b828d31eb1ee97771fd3b4645c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:50.451782   22244 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key
	I0403 18:12:50.505915   22244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt ...
	I0403 18:12:50.505946   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt: {Name:mkc7b1684fa8cbb6e11fb50d2f181d4ce6099738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:50.506131   22244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key ...
	I0403 18:12:50.506145   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key: {Name:mkcf3752073eb8ae49a14c076d9faa35f3992794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:50.506243   22244 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key
	I0403 18:12:51.056740   22244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt ...
	I0403 18:12:51.056770   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt: {Name:mk927918248057625e6f2ef4b9f18a7442c9cc8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:51.056957   22244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key ...
	I0403 18:12:51.056972   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key: {Name:mk6e798bae313144a03ae27504a46693c9cf71f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:51.057071   22244 certs.go:256] generating profile certs ...
	I0403 18:12:51.057132   22244 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.key
	I0403 18:12:51.057151   22244 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt with IP's: []
	I0403 18:12:51.204941   22244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt ...
	I0403 18:12:51.204971   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: {Name:mkfd277e3ae255036ce99aa491847e20cb554b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:51.205161   22244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.key ...
	I0403 18:12:51.205176   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.key: {Name:mk4fbf4891bd2c33d6b677f6abecacff75f980db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:51.205279   22244 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.key.2bee6a9d
	I0403 18:12:51.205329   22244 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.crt.2bee6a9d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.130]
	I0403 18:12:51.324409   22244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.crt.2bee6a9d ...
	I0403 18:12:51.324438   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.crt.2bee6a9d: {Name:mk680ab6d27875eb3575a04cb16dcab4426fa1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:51.324625   22244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.key.2bee6a9d ...
	I0403 18:12:51.324641   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.key.2bee6a9d: {Name:mk8da20cc44602b407576a0e76f3de69a200bb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:51.324737   22244 certs.go:381] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.crt.2bee6a9d -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.crt
	I0403 18:12:51.324810   22244 certs.go:385] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.key.2bee6a9d -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.key
	I0403 18:12:51.324855   22244 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/proxy-client.key
	I0403 18:12:51.324870   22244 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/proxy-client.crt with IP's: []
	I0403 18:12:51.385324   22244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/proxy-client.crt ...
	I0403 18:12:51.385353   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/proxy-client.crt: {Name:mk4b2afd50875ad88d5c312bc1d22110a4d37664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:51.385515   22244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/proxy-client.key ...
	I0403 18:12:51.385530   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/proxy-client.key: {Name:mk3fa4a8d5eda1e0395cec94aa20704709281fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:51.385707   22244 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem (1675 bytes)
	I0403 18:12:51.385739   22244 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem (1082 bytes)
	I0403 18:12:51.385767   22244 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem (1123 bytes)
	I0403 18:12:51.385789   22244 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem (1675 bytes)
	I0403 18:12:51.386279   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0403 18:12:51.409010   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0403 18:12:51.430072   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0403 18:12:51.450571   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0403 18:12:51.471609   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0403 18:12:51.492613   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0403 18:12:51.513154   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0403 18:12:51.534192   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0403 18:12:51.554895   22244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0403 18:12:51.576219   22244 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0403 18:12:51.591637   22244 ssh_runner.go:195] Run: openssl version
	I0403 18:12:51.597297   22244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0403 18:12:51.607267   22244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0403 18:12:51.611280   22244 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  3 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0403 18:12:51.611341   22244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0403 18:12:51.616640   22244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0403 18:12:51.627326   22244 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0403 18:12:51.631198   22244 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0403 18:12:51.631259   22244 kubeadm.go:392] StartCluster: {Name:addons-445082 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-445082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 18:12:51.631342   22244 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0403 18:12:51.631420   22244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 18:12:51.664104   22244 cri.go:89] found id: ""
	I0403 18:12:51.664165   22244 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0403 18:12:51.673542   22244 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0403 18:12:51.685752   22244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 18:12:51.696103   22244 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 18:12:51.696122   22244 kubeadm.go:157] found existing configuration files:
	
	I0403 18:12:51.696174   22244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 18:12:51.704890   22244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 18:12:51.704950   22244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 18:12:51.713973   22244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 18:12:51.722663   22244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 18:12:51.722724   22244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 18:12:51.731716   22244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 18:12:51.740017   22244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 18:12:51.740075   22244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 18:12:51.749056   22244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 18:12:51.757786   22244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 18:12:51.757848   22244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 18:12:51.766686   22244 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0403 18:12:51.835784   22244 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0403 18:12:51.836350   22244 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 18:12:51.933553   22244 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 18:12:51.933651   22244 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 18:12:51.933760   22244 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0403 18:12:51.943092   22244 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 18:12:52.035383   22244 out.go:235]   - Generating certificates and keys ...
	I0403 18:12:52.035513   22244 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 18:12:52.035601   22244 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 18:12:52.511424   22244 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0403 18:12:52.768605   22244 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0403 18:12:52.848249   22244 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0403 18:12:53.062352   22244 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0403 18:12:53.112905   22244 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0403 18:12:53.113172   22244 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-445082 localhost] and IPs [192.168.39.130 127.0.0.1 ::1]
	I0403 18:12:53.292974   22244 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0403 18:12:53.293259   22244 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-445082 localhost] and IPs [192.168.39.130 127.0.0.1 ::1]
	I0403 18:12:53.364430   22244 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0403 18:12:53.633387   22244 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0403 18:12:53.711974   22244 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0403 18:12:53.712113   22244 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 18:12:53.980899   22244 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 18:12:54.133657   22244 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0403 18:12:54.209982   22244 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 18:12:54.329140   22244 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 18:12:54.456609   22244 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 18:12:54.457885   22244 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 18:12:54.461321   22244 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 18:12:54.463055   22244 out.go:235]   - Booting up control plane ...
	I0403 18:12:54.463201   22244 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 18:12:54.463323   22244 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 18:12:54.463418   22244 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 18:12:54.477796   22244 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 18:12:54.484502   22244 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 18:12:54.484558   22244 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 18:12:54.602656   22244 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0403 18:12:54.602892   22244 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0403 18:12:55.104000   22244 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.394264ms
	I0403 18:12:55.104147   22244 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0403 18:12:59.602190   22244 kubeadm.go:310] [api-check] The API server is healthy after 4.501192559s
	I0403 18:12:59.616373   22244 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0403 18:12:59.633349   22244 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0403 18:12:59.654583   22244 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0403 18:12:59.654816   22244 kubeadm.go:310] [mark-control-plane] Marking the node addons-445082 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0403 18:12:59.664097   22244 kubeadm.go:310] [bootstrap-token] Using token: nmwuj5.0kd5oiqnmr4nxh2a
	I0403 18:12:59.665223   22244 out.go:235]   - Configuring RBAC rules ...
	I0403 18:12:59.665357   22244 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0403 18:12:59.672001   22244 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0403 18:12:59.677709   22244 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0403 18:12:59.680284   22244 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0403 18:12:59.682957   22244 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0403 18:12:59.685996   22244 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0403 18:13:00.008328   22244 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0403 18:13:00.442523   22244 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0403 18:13:01.006847   22244 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0403 18:13:01.007851   22244 kubeadm.go:310] 
	I0403 18:13:01.007972   22244 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0403 18:13:01.007986   22244 kubeadm.go:310] 
	I0403 18:13:01.008100   22244 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0403 18:13:01.008116   22244 kubeadm.go:310] 
	I0403 18:13:01.008152   22244 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0403 18:13:01.008249   22244 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0403 18:13:01.008323   22244 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0403 18:13:01.008339   22244 kubeadm.go:310] 
	I0403 18:13:01.008432   22244 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0403 18:13:01.008452   22244 kubeadm.go:310] 
	I0403 18:13:01.008523   22244 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0403 18:13:01.008536   22244 kubeadm.go:310] 
	I0403 18:13:01.008612   22244 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0403 18:13:01.008723   22244 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0403 18:13:01.008808   22244 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0403 18:13:01.008816   22244 kubeadm.go:310] 
	I0403 18:13:01.008930   22244 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0403 18:13:01.009031   22244 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0403 18:13:01.009045   22244 kubeadm.go:310] 
	I0403 18:13:01.009161   22244 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nmwuj5.0kd5oiqnmr4nxh2a \
	I0403 18:13:01.009410   22244 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 \
	I0403 18:13:01.009456   22244 kubeadm.go:310] 	--control-plane 
	I0403 18:13:01.009467   22244 kubeadm.go:310] 
	I0403 18:13:01.009584   22244 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0403 18:13:01.009594   22244 kubeadm.go:310] 
	I0403 18:13:01.009696   22244 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nmwuj5.0kd5oiqnmr4nxh2a \
	I0403 18:13:01.009817   22244 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 
	I0403 18:13:01.010127   22244 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 18:13:01.010201   22244 cni.go:84] Creating CNI manager for ""
	I0403 18:13:01.010217   22244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 18:13:01.012368   22244 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0403 18:13:01.013352   22244 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0403 18:13:01.023695   22244 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0403 18:13:01.044659   22244 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0403 18:13:01.044742   22244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 18:13:01.044754   22244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-445082 minikube.k8s.io/updated_at=2025_04_03T18_13_01_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053 minikube.k8s.io/name=addons-445082 minikube.k8s.io/primary=true
	I0403 18:13:01.089664   22244 ops.go:34] apiserver oom_adj: -16
	I0403 18:13:01.184342   22244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 18:13:01.684843   22244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 18:13:02.185174   22244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 18:13:02.684776   22244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 18:13:03.185149   22244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 18:13:03.685103   22244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 18:13:04.185226   22244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 18:13:04.684483   22244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 18:13:04.772843   22244 kubeadm.go:1113] duration metric: took 3.728160904s to wait for elevateKubeSystemPrivileges
	I0403 18:13:04.772873   22244 kubeadm.go:394] duration metric: took 13.141618164s to StartCluster
	I0403 18:13:04.772890   22244 settings.go:142] acquiring lock: {Name:mk92384ef10350a2ea8b1710e0b74c72a0214398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:13:04.773037   22244 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 18:13:04.773450   22244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:13:04.773653   22244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0403 18:13:04.773682   22244 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.130 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 18:13:04.773735   22244 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0403 18:13:04.773860   22244 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-445082"
	I0403 18:13:04.773891   22244 addons.go:69] Setting default-storageclass=true in profile "addons-445082"
	I0403 18:13:04.773893   22244 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-445082"
	I0403 18:13:04.773911   22244 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-445082"
	I0403 18:13:04.773913   22244 config.go:182] Loaded profile config "addons-445082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 18:13:04.773923   22244 addons.go:69] Setting registry=true in profile "addons-445082"
	I0403 18:13:04.773852   22244 addons.go:69] Setting yakd=true in profile "addons-445082"
	I0403 18:13:04.773902   22244 addons.go:69] Setting storage-provisioner=true in profile "addons-445082"
	I0403 18:13:04.773961   22244 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-445082"
	I0403 18:13:04.773965   22244 addons.go:238] Setting addon storage-provisioner=true in "addons-445082"
	I0403 18:13:04.773972   22244 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-445082"
	I0403 18:13:04.773987   22244 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-445082"
	I0403 18:13:04.773991   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.773997   22244 addons.go:238] Setting addon registry=true in "addons-445082"
	I0403 18:13:04.774013   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.774028   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.774033   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.774429   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.774441   22244 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-445082"
	I0403 18:13:04.773953   22244 addons.go:238] Setting addon yakd=true in "addons-445082"
	I0403 18:13:04.774454   22244 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-445082"
	I0403 18:13:04.774451   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.774474   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.774481   22244 addons.go:69] Setting cloud-spanner=true in profile "addons-445082"
	I0403 18:13:04.774486   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.774492   22244 addons.go:238] Setting addon cloud-spanner=true in "addons-445082"
	I0403 18:13:04.774498   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.774506   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.774521   22244 addons.go:69] Setting ingress-dns=true in profile "addons-445082"
	I0403 18:13:04.774527   22244 addons.go:69] Setting inspektor-gadget=true in profile "addons-445082"
	I0403 18:13:04.774538   22244 addons.go:238] Setting addon ingress-dns=true in "addons-445082"
	I0403 18:13:04.774474   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.774546   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.774541   22244 addons.go:238] Setting addon inspektor-gadget=true in "addons-445082"
	I0403 18:13:04.774514   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.774608   22244 addons.go:69] Setting ingress=true in profile "addons-445082"
	I0403 18:13:04.774620   22244 addons.go:238] Setting addon ingress=true in "addons-445082"
	I0403 18:13:04.774512   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.774521   22244 addons.go:69] Setting gcp-auth=true in profile "addons-445082"
	I0403 18:13:04.774676   22244 mustload.go:65] Loading cluster: addons-445082
	I0403 18:13:04.774705   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.774763   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.774794   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.774813   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.774846   22244 config.go:182] Loaded profile config "addons-445082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 18:13:04.774884   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.774911   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.774937   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.774955   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.775134   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.775156   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.775218   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.775238   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.775251   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.773870   22244 addons.go:69] Setting metrics-server=true in profile "addons-445082"
	I0403 18:13:04.775272   22244 addons.go:238] Setting addon metrics-server=true in "addons-445082"
	I0403 18:13:04.775273   22244 addons.go:69] Setting volcano=true in profile "addons-445082"
	I0403 18:13:04.775287   22244 addons.go:238] Setting addon volcano=true in "addons-445082"
	I0403 18:13:04.774433   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.775308   22244 addons.go:69] Setting volumesnapshots=true in profile "addons-445082"
	I0403 18:13:04.775312   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.775319   22244 addons.go:238] Setting addon volumesnapshots=true in "addons-445082"
	I0403 18:13:04.775341   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.775313   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.775658   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.775687   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.775731   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.775747   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.775769   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.775804   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.775833   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.775837   22244 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-445082"
	I0403 18:13:04.775907   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.776692   22244 out.go:177] * Verifying Kubernetes components...
	I0403 18:13:04.777993   22244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 18:13:04.795771   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44789
	I0403 18:13:04.795856   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0403 18:13:04.796024   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0403 18:13:04.799276   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.799312   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.799321   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.799338   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.800266   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.800306   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.800757   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.800883   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.801458   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.801475   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.801611   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.801622   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.801904   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.801990   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.802625   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.802660   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.806549   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.808625   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.809016   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.809047   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.809546   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.810218   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.810233   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.810594   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.810974   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.815156   22244 addons.go:238] Setting addon default-storageclass=true in "addons-445082"
	I0403 18:13:04.815202   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.815575   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.815621   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.830992   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44261
	I0403 18:13:04.831593   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.832777   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.832803   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.833162   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0403 18:13:04.833389   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.833873   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.834262   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.834280   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.835125   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.835572   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.835604   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.836348   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.836391   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.841380   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0403 18:13:04.842013   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.842600   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.842619   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.843056   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.843622   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.843658   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.843854   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0403 18:13:04.844034   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33773
	I0403 18:13:04.844456   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.844853   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.844871   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.845230   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.845751   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.845787   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.847099   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.847169   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
	I0403 18:13:04.847678   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.847820   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.847831   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.848815   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.848831   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.849228   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.849391   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.851454   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.852658   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I0403 18:13:04.852915   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.853423   22244 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0403 18:13:04.853473   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.853904   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.853925   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.854141   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I0403 18:13:04.854497   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.854710   22244 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0403 18:13:04.854731   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0403 18:13:04.854750   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.854891   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.854915   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.855004   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.855020   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.855247   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.855369   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.855947   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.855983   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.856597   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.856640   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.858083   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.858525   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.858551   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.858803   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.859403   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I0403 18:13:04.859518   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.859750   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.859914   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.860780   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.861424   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.861442   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.862367   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.862943   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.862992   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.869148   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43819
	I0403 18:13:04.870216   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.870734   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.870750   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.871119   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.871651   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.871677   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.873230   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41701
	I0403 18:13:04.873378   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42419
	I0403 18:13:04.876176   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43299
	I0403 18:13:04.876311   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0403 18:13:04.876684   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.876785   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.877300   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.877392   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.877407   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.877484   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.877499   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.877976   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.877993   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.878054   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44329
	I0403 18:13:04.878197   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.878268   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.878391   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.878588   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.878655   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.878876   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.878892   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.878991   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.879396   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.879433   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.880299   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.880334   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.881773   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.881789   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.881844   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.881884   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.882690   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.883103   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.883134   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.884562   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.884589   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.885288   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32953
	I0403 18:13:04.885757   22244 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0403 18:13:04.885785   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.886213   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.886229   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.887012   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.887148   22244 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0403 18:13:04.887168   22244 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0403 18:13:04.887174   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.887187   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.890164   22244 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-445082"
	I0403 18:13:04.890201   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:04.890547   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.890576   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.890768   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.890817   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43757
	I0403 18:13:04.891209   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.891227   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.891802   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.892016   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.892233   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.892457   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.893034   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.893624   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.893639   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.893698   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0403 18:13:04.894502   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.894686   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.895315   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I0403 18:13:04.895482   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.895954   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.896399   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.896422   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.896478   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.896611   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.896640   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.896788   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44607
	I0403 18:13:04.896967   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.897207   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.897274   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.897791   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.897812   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.898066   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.898161   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.899310   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.899502   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.900201   22244 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0403 18:13:04.900210   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.901425   22244 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0403 18:13:04.901556   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.901649   22244 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0403 18:13:04.901662   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0403 18:13:04.901679   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.903016   22244 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0403 18:13:04.903033   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0403 18:13:04.903050   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.903554   22244 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0403 18:13:04.904558   22244 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0403 18:13:04.904575   22244 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0403 18:13:04.904594   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.907561   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39285
	I0403 18:13:04.908330   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.908421   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44633
	I0403 18:13:04.909365   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.909453   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.909473   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.909492   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.909551   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.909566   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.909676   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.909830   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.909890   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.910299   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.910307   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.910355   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0403 18:13:04.910454   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.910465   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.910736   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.910874   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.910975   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.911168   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.912448   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.912505   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.912721   22244 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0403 18:13:04.912736   22244 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0403 18:13:04.912949   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.913570   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.913589   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.914054   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.914474   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.914617   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.914629   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.914697   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42845
	I0403 18:13:04.914852   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.915690   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.915731   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.915798   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.916172   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.916947   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.916967   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.917259   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.917582   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.917638   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.917804   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.918224   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.918278   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.919364   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.919482   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.919556   22244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0403 18:13:04.919953   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.919969   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.920304   22244 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 18:13:04.920593   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39171
	I0403 18:13:04.920411   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.920439   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.920774   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.920482   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.920936   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.921052   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.921093   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.921693   22244 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 18:13:04.921706   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0403 18:13:04.921721   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.921741   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.921255   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.921769   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.921948   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.922342   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.922503   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.922522   22244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0403 18:13:04.923647   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.923949   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:04.923960   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:04.924028   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37899
	I0403 18:13:04.924160   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:04.924168   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:04.924176   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:04.924182   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:04.924438   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:04.924457   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:04.924469   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	W0403 18:13:04.924525   22244 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0403 18:13:04.924650   22244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0403 18:13:04.925013   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.925547   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.925562   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.925620   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I0403 18:13:04.925992   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.926591   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.926667   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.926937   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.927014   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.927262   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.927074   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.927408   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.927434   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
	I0403 18:13:04.927443   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.927589   22244 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0403 18:13:04.927604   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.927638   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.928011   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.928237   22244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0403 18:13:04.928572   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.928372   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.928539   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.928749   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.928913   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.928993   22244 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0403 18:13:04.929006   22244 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0403 18:13:04.929021   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.929071   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.929526   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.929560   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.930579   22244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0403 18:13:04.930632   22244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0403 18:13:04.930936   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.931101   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.932297   22244 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0403 18:13:04.932364   22244 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0403 18:13:04.932450   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.932861   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.932899   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.933057   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.933112   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.933223   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.933364   22244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0403 18:13:04.933407   22244 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0403 18:13:04.933422   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0403 18:13:04.933437   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.933488   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.933695   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.934882   22244 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0403 18:13:04.934926   22244 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0403 18:13:04.936204   22244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0403 18:13:04.936294   22244 out.go:177]   - Using image docker.io/registry:2.8.3
	I0403 18:13:04.936878   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.937219   22244 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0403 18:13:04.937263   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.937281   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.937378   22244 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0403 18:13:04.937386   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0403 18:13:04.937397   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.937536   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.937660   22244 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0403 18:13:04.937670   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0403 18:13:04.937670   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.937680   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.937802   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.937916   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.938399   22244 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0403 18:13:04.938415   22244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0403 18:13:04.938429   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.941583   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.942359   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.942403   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.942422   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.942592   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.942657   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.942785   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.942876   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.942897   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.942932   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.943012   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.943213   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.943214   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.943230   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.943259   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.943464   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I0403 18:13:04.943594   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.943808   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.943920   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.943955   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.943992   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.944336   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.944353   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.944441   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.944602   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.945006   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.945718   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:04.945763   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:04.945972   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I0403 18:13:04.946406   22244 main.go:141] libmachine: () Calling .GetVersion
	W0403 18:13:04.946773   22244 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44772->192.168.39.130:22: read: connection reset by peer
	I0403 18:13:04.946804   22244 retry.go:31] will retry after 263.42623ms: ssh: handshake failed: read tcp 192.168.39.1:44772->192.168.39.130:22: read: connection reset by peer
	I0403 18:13:04.946998   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.947019   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.947373   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.947531   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.949309   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.951051   22244 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0403 18:13:04.952131   22244 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0403 18:13:04.952147   22244 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0403 18:13:04.952164   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.954815   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.955301   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.955320   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.955482   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.955656   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.955814   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.956011   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:04.963201   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0403 18:13:04.963584   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:04.963994   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:04.964011   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:04.964388   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:04.964567   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:04.966242   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:04.967464   22244 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0403 18:13:04.968409   22244 out.go:177]   - Using image docker.io/busybox:stable
	I0403 18:13:04.969333   22244 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0403 18:13:04.969351   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0403 18:13:04.969374   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:04.972315   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.972697   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:04.972715   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:04.972893   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:04.973031   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:04.973164   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:04.973295   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:05.126153   22244 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 18:13:05.126200   22244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0403 18:13:05.153456   22244 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0403 18:13:05.153474   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0403 18:13:05.204331   22244 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0403 18:13:05.204363   22244 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0403 18:13:05.220374   22244 node_ready.go:35] waiting up to 6m0s for node "addons-445082" to be "Ready" ...
	I0403 18:13:05.222674   22244 node_ready.go:49] node "addons-445082" has status "Ready":"True"
	I0403 18:13:05.222690   22244 node_ready.go:38] duration metric: took 2.289731ms for node "addons-445082" to be "Ready" ...
	I0403 18:13:05.222700   22244 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 18:13:05.226262   22244 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-445082" in "kube-system" namespace to be "Ready" ...
	I0403 18:13:05.263064   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0403 18:13:05.343087   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0403 18:13:05.370016   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0403 18:13:05.371068   22244 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0403 18:13:05.371092   22244 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0403 18:13:05.377214   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0403 18:13:05.400121   22244 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0403 18:13:05.400145   22244 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0403 18:13:05.420945   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0403 18:13:05.432481   22244 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0403 18:13:05.432505   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0403 18:13:05.442980   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 18:13:05.465222   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0403 18:13:05.512552   22244 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0403 18:13:05.512577   22244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0403 18:13:05.515133   22244 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0403 18:13:05.515153   22244 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0403 18:13:05.545387   22244 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0403 18:13:05.545412   22244 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0403 18:13:05.594442   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0403 18:13:05.617358   22244 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0403 18:13:05.617382   22244 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0403 18:13:05.661834   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0403 18:13:05.712200   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0403 18:13:05.715084   22244 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0403 18:13:05.715105   22244 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0403 18:13:05.755380   22244 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0403 18:13:05.755407   22244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0403 18:13:05.779241   22244 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0403 18:13:05.779262   22244 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0403 18:13:05.885403   22244 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0403 18:13:05.885428   22244 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0403 18:13:05.946655   22244 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0403 18:13:05.946681   22244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0403 18:13:05.968421   22244 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0403 18:13:05.968444   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0403 18:13:05.986427   22244 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0403 18:13:05.986450   22244 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0403 18:13:06.091177   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0403 18:13:06.137381   22244 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0403 18:13:06.137409   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0403 18:13:06.220038   22244 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0403 18:13:06.220075   22244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0403 18:13:06.284989   22244 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0403 18:13:06.285011   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0403 18:13:06.440231   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0403 18:13:06.473107   22244 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0403 18:13:06.473140   22244 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0403 18:13:06.578984   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0403 18:13:06.668993   22244 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0403 18:13:06.669019   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0403 18:13:06.948771   22244 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0403 18:13:06.948809   22244 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0403 18:13:07.234117   22244 pod_ready.go:103] pod "etcd-addons-445082" in "kube-system" namespace has status "Ready":"False"
	I0403 18:13:07.258110   22244 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0403 18:13:07.258134   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0403 18:13:07.414791   22244 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.288561065s)
	I0403 18:13:07.414835   22244 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0403 18:13:07.452552   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.109428105s)
	I0403 18:13:07.452585   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.082535852s)
	I0403 18:13:07.452611   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:07.452624   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:07.452627   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:07.452635   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:07.452898   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:07.452911   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:07.452919   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:07.452926   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:07.453013   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:07.453023   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:07.453031   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:07.453038   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:07.453158   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:07.453176   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:07.453194   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:07.453296   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:07.453316   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.190226869s)
	I0403 18:13:07.453326   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:07.453336   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:07.453340   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:07.453344   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:07.453548   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:07.453562   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:07.453570   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:07.453578   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:07.455136   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:07.455175   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:07.455189   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:07.560611   22244 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0403 18:13:07.560640   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0403 18:13:07.787315   22244 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0403 18:13:07.787339   22244 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0403 18:13:07.878808   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0403 18:13:07.918446   22244 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-445082" context rescaled to 1 replicas
	I0403 18:13:09.216397   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.839146255s)
	I0403 18:13:09.216454   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:09.216468   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:09.216503   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.795523255s)
	I0403 18:13:09.216544   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:09.216560   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:09.216740   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:09.216755   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:09.216764   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:09.216772   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:09.216858   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:09.216875   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:09.216888   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:09.216902   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:09.216912   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:09.216971   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:09.217078   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:09.217187   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:09.217209   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:09.217221   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:09.295243   22244 pod_ready.go:103] pod "etcd-addons-445082" in "kube-system" namespace has status "Ready":"False"
	I0403 18:13:09.321198   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:09.321225   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:09.321527   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:09.321543   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:09.505857   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.062836402s)
	I0403 18:13:09.505909   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:09.505920   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:09.505945   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.040688449s)
	I0403 18:13:09.505990   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:09.506008   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:09.506243   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:09.506287   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:09.506310   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:09.506313   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:09.506326   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:09.506329   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:09.506336   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:09.506349   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:09.506579   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:09.506607   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:09.506618   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:09.506622   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:09.506628   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:09.579951   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:09.579975   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:09.580342   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:09.580369   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:09.580386   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:11.725777   22244 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0403 18:13:11.725815   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:11.729098   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:11.729549   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:11.729582   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:11.729741   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:11.729913   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:11.730092   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:11.730209   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:11.761415   22244 pod_ready.go:103] pod "etcd-addons-445082" in "kube-system" namespace has status "Ready":"False"
	I0403 18:13:12.208968   22244 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0403 18:13:12.236800   22244 pod_ready.go:93] pod "etcd-addons-445082" in "kube-system" namespace has status "Ready":"True"
	I0403 18:13:12.236823   22244 pod_ready.go:82] duration metric: took 7.010540151s for pod "etcd-addons-445082" in "kube-system" namespace to be "Ready" ...
	I0403 18:13:12.236834   22244 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-445082" in "kube-system" namespace to be "Ready" ...
	I0403 18:13:12.300254   22244 addons.go:238] Setting addon gcp-auth=true in "addons-445082"
	I0403 18:13:12.300313   22244 host.go:66] Checking if "addons-445082" exists ...
	I0403 18:13:12.300675   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:12.300711   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:12.316341   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
	I0403 18:13:12.316729   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:12.317095   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:12.317115   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:12.317410   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:12.317957   22244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:13:12.317988   22244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:13:12.332979   22244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I0403 18:13:12.333474   22244 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:13:12.334004   22244 main.go:141] libmachine: Using API Version  1
	I0403 18:13:12.334020   22244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:13:12.334365   22244 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:13:12.334558   22244 main.go:141] libmachine: (addons-445082) Calling .GetState
	I0403 18:13:12.336088   22244 main.go:141] libmachine: (addons-445082) Calling .DriverName
	I0403 18:13:12.336340   22244 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0403 18:13:12.336363   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHHostname
	I0403 18:13:12.339086   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:12.339526   22244 main.go:141] libmachine: (addons-445082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:df:ce", ip: ""} in network mk-addons-445082: {Iface:virbr1 ExpiryTime:2025-04-03 19:12:35 +0000 UTC Type:0 Mac:52:54:00:e7:df:ce Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:addons-445082 Clientid:01:52:54:00:e7:df:ce}
	I0403 18:13:12.339558   22244 main.go:141] libmachine: (addons-445082) DBG | domain addons-445082 has defined IP address 192.168.39.130 and MAC address 52:54:00:e7:df:ce in network mk-addons-445082
	I0403 18:13:12.339701   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHPort
	I0403 18:13:12.339900   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHKeyPath
	I0403 18:13:12.340051   22244 main.go:141] libmachine: (addons-445082) Calling .GetSSHUsername
	I0403 18:13:12.340186   22244 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/addons-445082/id_rsa Username:docker}
	I0403 18:13:12.822661   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.22817814s)
	I0403 18:13:12.822717   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:12.822718   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.160848153s)
	I0403 18:13:12.822729   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:12.822774   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:12.822791   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:12.822797   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.110565699s)
	I0403 18:13:12.822832   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:12.822848   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:12.822877   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.731672601s)
	I0403 18:13:12.822897   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:12.822908   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.382650008s)
	I0403 18:13:12.822925   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:12.822949   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:12.822911   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:12.823018   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.243997088s)
	W0403 18:13:12.823043   22244 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0403 18:13:12.823075   22244 retry.go:31] will retry after 333.204621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0403 18:13:12.823189   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:12.823203   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:12.823203   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:12.823211   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:12.823211   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:12.823219   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:12.823221   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:12.823230   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:12.823236   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:12.823246   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:12.823254   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:12.823261   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:12.823189   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:12.823268   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:12.823297   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:12.823304   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:12.823311   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:12.823316   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:12.823237   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:12.825108   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:12.825122   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:12.825128   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:12.825140   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:12.825143   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:12.825151   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:12.825151   22244 addons.go:479] Verifying addon registry=true in "addons-445082"
	I0403 18:13:12.825152   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:12.825316   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:12.825338   22244 addons.go:479] Verifying addon metrics-server=true in "addons-445082"
	I0403 18:13:12.825356   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:12.825393   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:12.825404   22244 addons.go:479] Verifying addon ingress=true in "addons-445082"
	I0403 18:13:12.825378   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:12.825162   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:12.825598   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:12.826390   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:12.826421   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:12.826428   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:12.826432   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:12.826461   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:12.826620   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:12.826582   22244 out.go:177] * Verifying registry addon...
	I0403 18:13:12.827284   22244 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-445082 service yakd-dashboard -n yakd-dashboard
	
	I0403 18:13:12.827292   22244 out.go:177] * Verifying ingress addon...
	I0403 18:13:12.828951   22244 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0403 18:13:12.829455   22244 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0403 18:13:12.846283   22244 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0403 18:13:12.846302   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:12.846311   22244 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0403 18:13:12.846326   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:13.156659   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0403 18:13:13.333789   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:13.333816   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:13.832706   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:13.936341   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:14.213251   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.334346349s)
	I0403 18:13:14.213310   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:14.213324   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:14.213319   22244 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.876954525s)
	I0403 18:13:14.213608   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:14.213625   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:14.213634   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:14.213643   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:14.213894   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:14.213908   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:14.213918   22244 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-445082"
	I0403 18:13:14.214748   22244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0403 18:13:14.215621   22244 out.go:177] * Verifying csi-hostpath-driver addon...
	I0403 18:13:14.217226   22244 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0403 18:13:14.217799   22244 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0403 18:13:14.218362   22244 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0403 18:13:14.218383   22244 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0403 18:13:14.257984   22244 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0403 18:13:14.258012   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:14.282101   22244 pod_ready.go:103] pod "kube-apiserver-addons-445082" in "kube-system" namespace has status "Ready":"False"
	I0403 18:13:14.342646   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:14.342693   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:14.359008   22244 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0403 18:13:14.359041   22244 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0403 18:13:14.413960   22244 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0403 18:13:14.413981   22244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0403 18:13:14.538574   22244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0403 18:13:14.722433   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:14.742214   22244 pod_ready.go:93] pod "kube-apiserver-addons-445082" in "kube-system" namespace has status "Ready":"True"
	I0403 18:13:14.742233   22244 pod_ready.go:82] duration metric: took 2.505391887s for pod "kube-apiserver-addons-445082" in "kube-system" namespace to be "Ready" ...
	I0403 18:13:14.742242   22244 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-445082" in "kube-system" namespace to be "Ready" ...
	I0403 18:13:14.750601   22244 pod_ready.go:93] pod "kube-controller-manager-addons-445082" in "kube-system" namespace has status "Ready":"True"
	I0403 18:13:14.750623   22244 pod_ready.go:82] duration metric: took 8.373816ms for pod "kube-controller-manager-addons-445082" in "kube-system" namespace to be "Ready" ...
	I0403 18:13:14.750635   22244 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-445082" in "kube-system" namespace to be "Ready" ...
	I0403 18:13:14.757742   22244 pod_ready.go:93] pod "kube-scheduler-addons-445082" in "kube-system" namespace has status "Ready":"True"
	I0403 18:13:14.757760   22244 pod_ready.go:82] duration metric: took 7.115036ms for pod "kube-scheduler-addons-445082" in "kube-system" namespace to be "Ready" ...
	I0403 18:13:14.757769   22244 pod_ready.go:39] duration metric: took 9.535056277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 18:13:14.757791   22244 api_server.go:52] waiting for apiserver process to appear ...
	I0403 18:13:14.757845   22244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 18:13:14.832985   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:14.833429   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:15.006856   22244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.850116054s)
	I0403 18:13:15.006917   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:15.006934   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:15.007180   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:15.007229   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:15.007242   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:15.007256   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:15.007268   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:15.007473   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:15.007500   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:15.007478   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:15.226667   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:15.378669   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:15.378853   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:15.403711   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:15.403735   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:15.403750   22244 api_server.go:72] duration metric: took 10.630031537s to wait for apiserver process to appear ...
	I0403 18:13:15.403770   22244 api_server.go:88] waiting for apiserver healthz status ...
	I0403 18:13:15.403791   22244 api_server.go:253] Checking apiserver healthz at https://192.168.39.130:8443/healthz ...
	I0403 18:13:15.403995   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:15.404016   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:15.404017   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:15.404025   22244 main.go:141] libmachine: Making call to close driver server
	I0403 18:13:15.404044   22244 main.go:141] libmachine: (addons-445082) Calling .Close
	I0403 18:13:15.404281   22244 main.go:141] libmachine: Successfully made call to close driver server
	I0403 18:13:15.404296   22244 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 18:13:15.404306   22244 main.go:141] libmachine: (addons-445082) DBG | Closing plugin on server side
	I0403 18:13:15.405127   22244 addons.go:479] Verifying addon gcp-auth=true in "addons-445082"
	I0403 18:13:15.406489   22244 out.go:177] * Verifying gcp-auth addon...
	I0403 18:13:15.408029   22244 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0403 18:13:15.411423   22244 api_server.go:279] https://192.168.39.130:8443/healthz returned 200:
	ok
	I0403 18:13:15.412477   22244 api_server.go:141] control plane version: v1.32.2
	I0403 18:13:15.412493   22244 api_server.go:131] duration metric: took 8.71715ms to wait for apiserver health ...
	I0403 18:13:15.412500   22244 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 18:13:15.428020   22244 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0403 18:13:15.428041   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:15.429086   22244 system_pods.go:59] 19 kube-system pods found
	I0403 18:13:15.429117   22244 system_pods.go:61] "amd-gpu-device-plugin-7p29k" [3d39cd37-af72-4a14-8531-1f66cd7238c7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0403 18:13:15.429124   22244 system_pods.go:61] "coredns-668d6bf9bc-78bx4" [7330318b-714b-4090-b6cd-29d5cd13118c] Running
	I0403 18:13:15.429134   22244 system_pods.go:61] "coredns-668d6bf9bc-gr7kv" [fd4f369c-0b9c-4eaa-b8bf-597c29bb0d1a] Running
	I0403 18:13:15.429142   22244 system_pods.go:61] "csi-hostpath-attacher-0" [8bde948e-a992-4991-8173-9ff227977bcb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0403 18:13:15.429159   22244 system_pods.go:61] "csi-hostpath-resizer-0" [94934d47-449b-408c-b061-66fdbf83e151] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0403 18:13:15.429176   22244 system_pods.go:61] "csi-hostpathplugin-6449c" [27e36dc8-0ccf-44f6-8f56-e8da2e37cfb3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0403 18:13:15.429182   22244 system_pods.go:61] "etcd-addons-445082" [5207eb9c-5816-47f3-bd93-fae19a9665ae] Running
	I0403 18:13:15.429194   22244 system_pods.go:61] "kube-apiserver-addons-445082" [69292ddc-22ae-4757-b6ef-e79adba6da96] Running
	I0403 18:13:15.429203   22244 system_pods.go:61] "kube-controller-manager-addons-445082" [a4629e32-a6ff-445b-ada8-1b44e584a6ba] Running
	I0403 18:13:15.429211   22244 system_pods.go:61] "kube-ingress-dns-minikube" [33c57015-07a6-47fa-bc0f-bcc2e7195ec5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0403 18:13:15.429224   22244 system_pods.go:61] "kube-proxy-bkjxx" [e60927c3-2084-4e4d-b850-53787b6f9165] Running
	I0403 18:13:15.429231   22244 system_pods.go:61] "kube-scheduler-addons-445082" [e7ae03cd-8e8e-4b29-a728-cd0af52b8ed0] Running
	I0403 18:13:15.429244   22244 system_pods.go:61] "metrics-server-7fbb699795-6zvlq" [fe49f00e-6163-4052-914d-5a02c1f44677] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0403 18:13:15.429255   22244 system_pods.go:61] "nvidia-device-plugin-daemonset-d5kr9" [84407566-6cac-4282-8a90-7dc046450e7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0403 18:13:15.429265   22244 system_pods.go:61] "registry-6c88467877-f7fn6" [db35ba87-f90e-477b-a105-2bde628b1715] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0403 18:13:15.429275   22244 system_pods.go:61] "registry-proxy-gnl8c" [9d95bda0-c392-473d-b05c-01546c40ea02] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0403 18:13:15.429286   22244 system_pods.go:61] "snapshot-controller-68b874b76f-c7gk8" [4b43e156-617e-401b-a9ec-d1e9aead78ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0403 18:13:15.429296   22244 system_pods.go:61] "snapshot-controller-68b874b76f-hnwcg" [c28389f7-facf-45b1-844a-ab1662d8500b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0403 18:13:15.429304   22244 system_pods.go:61] "storage-provisioner" [0eddf727-033f-41ff-b2f9-c29be537cb75] Running
	I0403 18:13:15.429313   22244 system_pods.go:74] duration metric: took 16.806649ms to wait for pod list to return data ...
	I0403 18:13:15.429326   22244 default_sa.go:34] waiting for default service account to be created ...
	I0403 18:13:15.443168   22244 default_sa.go:45] found service account: "default"
	I0403 18:13:15.443200   22244 default_sa.go:55] duration metric: took 13.864812ms for default service account to be created ...
	I0403 18:13:15.443211   22244 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 18:13:15.476022   22244 system_pods.go:86] 19 kube-system pods found
	I0403 18:13:15.476061   22244 system_pods.go:89] "amd-gpu-device-plugin-7p29k" [3d39cd37-af72-4a14-8531-1f66cd7238c7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0403 18:13:15.476069   22244 system_pods.go:89] "coredns-668d6bf9bc-78bx4" [7330318b-714b-4090-b6cd-29d5cd13118c] Running
	I0403 18:13:15.476085   22244 system_pods.go:89] "coredns-668d6bf9bc-gr7kv" [fd4f369c-0b9c-4eaa-b8bf-597c29bb0d1a] Running
	I0403 18:13:15.476093   22244 system_pods.go:89] "csi-hostpath-attacher-0" [8bde948e-a992-4991-8173-9ff227977bcb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0403 18:13:15.476101   22244 system_pods.go:89] "csi-hostpath-resizer-0" [94934d47-449b-408c-b061-66fdbf83e151] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0403 18:13:15.476123   22244 system_pods.go:89] "csi-hostpathplugin-6449c" [27e36dc8-0ccf-44f6-8f56-e8da2e37cfb3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0403 18:13:15.476137   22244 system_pods.go:89] "etcd-addons-445082" [5207eb9c-5816-47f3-bd93-fae19a9665ae] Running
	I0403 18:13:15.476144   22244 system_pods.go:89] "kube-apiserver-addons-445082" [69292ddc-22ae-4757-b6ef-e79adba6da96] Running
	I0403 18:13:15.476150   22244 system_pods.go:89] "kube-controller-manager-addons-445082" [a4629e32-a6ff-445b-ada8-1b44e584a6ba] Running
	I0403 18:13:15.476163   22244 system_pods.go:89] "kube-ingress-dns-minikube" [33c57015-07a6-47fa-bc0f-bcc2e7195ec5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0403 18:13:15.476169   22244 system_pods.go:89] "kube-proxy-bkjxx" [e60927c3-2084-4e4d-b850-53787b6f9165] Running
	I0403 18:13:15.476174   22244 system_pods.go:89] "kube-scheduler-addons-445082" [e7ae03cd-8e8e-4b29-a728-cd0af52b8ed0] Running
	I0403 18:13:15.476193   22244 system_pods.go:89] "metrics-server-7fbb699795-6zvlq" [fe49f00e-6163-4052-914d-5a02c1f44677] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0403 18:13:15.476203   22244 system_pods.go:89] "nvidia-device-plugin-daemonset-d5kr9" [84407566-6cac-4282-8a90-7dc046450e7c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0403 18:13:15.476213   22244 system_pods.go:89] "registry-6c88467877-f7fn6" [db35ba87-f90e-477b-a105-2bde628b1715] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0403 18:13:15.476227   22244 system_pods.go:89] "registry-proxy-gnl8c" [9d95bda0-c392-473d-b05c-01546c40ea02] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0403 18:13:15.476235   22244 system_pods.go:89] "snapshot-controller-68b874b76f-c7gk8" [4b43e156-617e-401b-a9ec-d1e9aead78ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0403 18:13:15.476249   22244 system_pods.go:89] "snapshot-controller-68b874b76f-hnwcg" [c28389f7-facf-45b1-844a-ab1662d8500b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0403 18:13:15.476255   22244 system_pods.go:89] "storage-provisioner" [0eddf727-033f-41ff-b2f9-c29be537cb75] Running
	I0403 18:13:15.476264   22244 system_pods.go:126] duration metric: took 33.046276ms to wait for k8s-apps to be running ...
	I0403 18:13:15.476272   22244 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 18:13:15.476322   22244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:13:15.502382   22244 system_svc.go:56] duration metric: took 26.098956ms WaitForService to wait for kubelet
	I0403 18:13:15.502417   22244 kubeadm.go:582] duration metric: took 10.728701012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 18:13:15.502442   22244 node_conditions.go:102] verifying NodePressure condition ...
	I0403 18:13:15.505260   22244 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 18:13:15.505288   22244 node_conditions.go:123] node cpu capacity is 2
	I0403 18:13:15.505300   22244 node_conditions.go:105] duration metric: took 2.85325ms to run NodePressure ...
	I0403 18:13:15.505314   22244 start.go:241] waiting for startup goroutines ...
	I0403 18:13:15.723057   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:15.831893   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:15.833429   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:15.911646   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:16.222725   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:16.333093   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:16.334089   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:16.410932   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:16.721379   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:16.832435   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:16.832769   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:16.911341   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:17.222088   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:17.333243   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:17.333708   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:17.411747   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:17.720597   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:17.832645   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:17.832800   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:17.911240   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:18.221910   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:18.334984   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:18.335167   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:18.410563   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:18.722460   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:18.833044   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:18.834032   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:18.910480   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:19.221998   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:19.332796   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:19.332843   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:19.412376   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:19.721355   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:19.833450   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:19.833476   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:19.911405   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:20.221353   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:20.332908   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:20.332960   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:20.411758   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:20.721515   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:20.832843   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:20.833259   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:20.910829   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:21.221173   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:21.332453   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:21.333101   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:21.410196   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:21.721572   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:21.832179   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:21.832766   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:21.911388   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:22.221680   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:22.332471   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:22.332528   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:22.411041   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:22.721225   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:22.832694   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:22.832783   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:22.911348   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:23.222197   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:23.331992   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:23.332892   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:23.412611   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:23.721804   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:23.833044   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:23.833539   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:23.911143   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:24.221058   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:24.332323   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:24.332342   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:24.410896   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:24.721061   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:24.833391   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:24.833619   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:24.910867   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:25.220949   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:25.333405   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:25.333813   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:25.411257   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:25.721735   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:25.832858   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:25.833309   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:25.910959   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:26.221269   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:26.333654   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:26.333832   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:26.411755   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:26.720821   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:26.833085   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:26.833203   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:26.911122   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:27.221336   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:27.343371   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:27.343618   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:27.411486   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:27.838096   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:27.838460   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:27.839814   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:27.911396   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:28.221396   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:28.333301   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:28.333356   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:28.410681   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:28.720878   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:28.832565   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:28.832597   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:28.911103   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:29.221694   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:29.333519   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:29.333539   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:29.721815   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:29.721912   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:29.833158   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:29.833194   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:29.933072   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:30.221398   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:30.332258   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:30.333501   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:30.411386   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:30.721891   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:30.833437   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:30.833594   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:30.911119   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:31.727367   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:31.727562   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:31.727987   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:31.732035   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:31.732760   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:31.832870   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:31.832912   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:31.911740   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:32.220676   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:32.333071   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:32.333583   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:32.410870   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:32.721464   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:32.833093   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:32.833280   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:32.911552   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:33.221329   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:33.332094   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:33.332276   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:33.410445   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:33.721831   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:33.832821   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:33.832825   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:33.912615   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:34.222099   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:34.331927   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:34.332135   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:34.411128   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:34.721850   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:34.833325   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:34.833482   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:34.911978   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:35.221051   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:35.332567   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:35.333204   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:35.410366   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:35.722082   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:35.831779   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:35.833432   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:35.910777   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:36.220738   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:36.332399   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:36.332983   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:36.411444   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:36.721544   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:36.832504   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:36.832802   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:36.911333   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:37.221764   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:37.332442   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:37.332815   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:37.411252   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:37.721730   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:37.832396   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:37.832556   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:37.910969   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:38.221055   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:38.332765   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:38.333320   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:38.411466   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:38.721626   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:38.832280   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:38.832851   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:38.911577   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:39.220697   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:39.333562   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:39.333711   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:39.411371   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:39.720574   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:39.832071   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:39.833430   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:39.911504   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:40.221841   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:40.333283   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:40.333994   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:40.411335   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:40.721951   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:40.832698   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:40.832716   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:40.911070   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:41.222307   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:41.332700   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:41.333960   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:41.411419   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:41.721765   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:41.832449   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:41.833316   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:41.911658   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:42.327221   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:42.331277   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:42.333732   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:42.411625   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:42.720813   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:42.833979   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:42.834197   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:42.911469   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:43.221566   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:43.332209   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:43.332258   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:43.411445   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:43.721638   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:43.832720   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:43.832792   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:43.911144   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:44.221738   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:44.336802   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:44.337174   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:44.410794   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:44.721674   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:44.831914   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:44.833616   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:44.911547   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:45.221733   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:45.333021   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:45.334096   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:45.410585   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:45.721801   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:45.833021   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:45.833121   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:45.910648   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:46.221547   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:46.332376   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:46.333210   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:46.411139   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:46.721160   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:46.831714   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:46.832812   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:46.911841   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:47.678087   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:47.678178   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:47.678474   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:47.678545   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:47.723259   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:47.836058   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:47.836122   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:47.934985   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:48.221282   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:48.333037   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:48.333237   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:48.411165   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:48.721774   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:48.833056   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:48.833239   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:48.915559   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:49.224901   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:49.332382   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:49.333210   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:49.412250   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:49.724134   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:49.833513   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:49.834523   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:49.911852   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:50.223268   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:50.333531   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:50.333704   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:50.434917   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:50.723001   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:50.833798   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:50.833903   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:50.913496   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:51.222020   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:51.333060   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:51.333244   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:51.410627   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:51.720859   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:51.832710   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:51.832849   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:51.914323   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:52.223153   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:52.332902   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:52.333049   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:52.415592   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:52.721959   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:52.833004   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:52.833415   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:52.910946   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:53.221413   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:53.333424   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:53.333449   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:53.434608   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:53.720675   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:53.832513   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:53.832758   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:53.916037   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:54.221536   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:54.332479   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:54.333388   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:54.410977   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:54.722012   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:54.832920   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:54.833067   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:54.911295   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:55.221798   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:55.333025   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:55.333216   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:55.411114   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:55.721905   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:55.832996   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:55.834199   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:55.911013   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:56.221487   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:56.332233   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:56.332844   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:56.415249   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:56.721694   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:56.832842   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:56.833087   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:56.910844   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:57.221706   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:57.332585   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:57.333416   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:57.411082   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:57.721637   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:57.832920   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:57.833201   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:57.910479   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:58.221818   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:58.335108   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:58.335139   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:58.410441   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:58.722036   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:58.833755   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:58.834355   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:58.910992   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:59.221092   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:59.332764   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:59.332784   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:59.414506   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:13:59.720597   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:13:59.832198   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:13:59.832385   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:13:59.911301   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:00.221739   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:00.334201   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:00.334343   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:14:00.411106   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:00.721940   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:00.832877   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0403 18:14:00.833036   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:00.912225   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:01.221971   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:01.331643   22244 kapi.go:107] duration metric: took 48.502691168s to wait for kubernetes.io/minikube-addons=registry ...
	I0403 18:14:01.333213   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:01.410805   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:01.721191   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:01.832785   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:01.911468   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:02.222472   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:02.333429   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:02.411247   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:02.724130   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:02.833053   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:03.323243   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:03.323425   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:03.332976   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:03.411679   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:03.720736   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:03.832362   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:03.910868   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:04.221564   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:04.332619   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:04.411967   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:04.724548   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:04.833482   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:04.912404   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:05.222616   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:05.333760   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:05.412408   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:05.721622   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:05.832779   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:05.911665   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:06.222777   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:06.335021   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:06.414616   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:06.722117   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:06.833001   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:06.917003   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:07.221675   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:07.332768   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:07.410984   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:07.721320   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:07.833409   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:07.910783   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:08.221715   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:08.333082   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:08.412093   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:08.721839   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:08.834999   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:08.935568   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:09.222374   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:09.334146   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:09.412830   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:09.721380   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:09.832814   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:09.911305   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:10.226133   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:10.333401   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:10.434460   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:10.722093   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:10.837459   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:10.911108   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:11.237660   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:11.340824   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:11.411490   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:11.723564   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:11.832223   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:11.911762   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:12.221009   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:12.332769   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:12.411411   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:12.722068   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:12.833380   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:12.910857   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:13.221496   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:13.334495   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:13.411391   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:13.721981   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:13.833110   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:13.911529   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:14.221847   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:14.332965   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:14.411856   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:15.160395   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:15.162159   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:15.163198   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:15.224159   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:15.343363   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:15.414509   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:15.721916   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:15.833060   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:15.911697   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:16.221207   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:16.333204   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:16.410524   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:16.722258   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:16.834158   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:16.935146   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:17.221829   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:17.332293   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:17.410626   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:17.721182   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:17.833344   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:17.934157   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:18.221687   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:18.333151   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:18.411891   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:18.721085   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:18.832990   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:18.911975   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:19.221832   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:19.333212   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:19.413931   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:19.721929   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:19.834049   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:19.911625   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:20.221887   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:20.333546   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:20.412503   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:20.723749   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:20.833918   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:20.911705   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:21.220989   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:21.333123   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:21.410454   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:21.951086   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:21.951122   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:21.951553   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:22.221803   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:22.332569   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:22.411069   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:22.723670   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:22.832564   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:22.911272   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:23.221924   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:23.333161   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:23.410489   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:23.844301   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:23.844425   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:23.911613   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:24.220704   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:24.332572   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:24.411100   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:24.722775   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:24.834228   22244 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0403 18:14:24.910655   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:25.250197   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:25.334721   22244 kapi.go:107] duration metric: took 1m12.505264414s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0403 18:14:25.411101   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:25.721034   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:25.910975   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:26.335529   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:26.411152   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:26.722838   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:26.911710   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:27.220907   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:27.412060   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:27.721275   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:27.910962   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:28.221989   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:28.411846   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:28.721810   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:28.911447   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:29.227344   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:29.411379   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:29.720922   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:29.910706   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:30.221269   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:30.411126   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:30.721241   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:30.911321   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:31.221326   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:31.411070   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:31.722167   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:31.911102   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0403 18:14:32.221163   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:32.413462   22244 kapi.go:107] duration metric: took 1m17.005428405s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0403 18:14:32.415128   22244 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-445082 cluster.
	I0403 18:14:32.416420   22244 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0403 18:14:32.417575   22244 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0403 18:14:32.721958   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:33.221706   22244 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0403 18:14:33.722186   22244 kapi.go:107] duration metric: took 1m19.504385411s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0403 18:14:33.723861   22244 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, storage-provisioner, default-storageclass, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0403 18:14:33.725074   22244 addons.go:514] duration metric: took 1m28.951338717s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher storage-provisioner default-storageclass metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0403 18:14:33.725113   22244 start.go:246] waiting for cluster config update ...
	I0403 18:14:33.725135   22244 start.go:255] writing updated cluster config ...
	I0403 18:14:33.725441   22244 ssh_runner.go:195] Run: rm -f paused
	I0403 18:14:33.776297   22244 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 18:14:33.777992   22244 out.go:177] * Done! kubectl is now configured to use "addons-445082" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.339757582Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743704295339730591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3183dbf6-d998-4396-aac6-96fe657dcb26 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.340531792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b53fbb1-f881-47f4-bd83-1a71f8e34b1e name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.340599328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b53fbb1-f881-47f4-bd83-1a71f8e34b1e name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.340966189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aabc9389d17f5de1aaa5a7136c4ae299932937e4421e17a2d784ff650abf86af,PodSandboxId:979eeec2143f145ad97d036e56398ed9917dd546b5c902287e13214f4763b91d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1743704158213968311,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 70b18ed9-c3b9-4c7b-83b1-fc83571346b9,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e8a1fec43be64d1836bcf6440798ac00ca83e022b013722a14891cb27ce78ec,PodSandboxId:db41adc95944ea1d4b284d2123c409ecfed763991752fe216454e396977a2b4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1743704078094832582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37f82061-84c9-4077-b38d-c8cf2a067e89,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931198b4671e48ff1b729062999926d7c7d9e1aee37ae476fb0f962c898a556,PodSandboxId:19db2d0c3c6df93232b7568e728dd0ffac34d429e5053504e32e69e5ca0eb2a7,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1743704064660614330,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rqlmj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d4e75281-7657-4e9f-aaa3-5c2237a53782,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:454dbb85dca5fe8687b6938bd7ffd90a8bbe29e9165b9767e8ee9d01263ec1c1,PodSandboxId:e82e4b25fd1101e48c72b96cffeda5df7d684203e0286ca65e748eab292f318e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743704048783807422,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7jjvv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 54485935-e1f9-404a-9df0-e1c397d07b1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f7e4d3a1d5d4e4cae3547d5ecb2668b3d7a9b9e6abec351de218b63b2db52f,PodSandboxId:7b7b65c29f202e637e4ba4925f4def6013ce5b32b06ff0192d0e8d59f31f0982,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743704048665825326,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6dmq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efe783b9-d665-4fab-9fb6-c7bc290b9891,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4fee71a78d30037da5b44853c735972fe5194dc482b3a734647a0eff43b488,PodSandboxId:9eb3936b0f73eb57d4d07c7f0d2aea346a8b53685d7c09215cdca044b85f36fe,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1743704016477178580,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7p29k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d39cd37-af72-4a14-8531-1f66cd7238c7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f597c251d0fef19aace1ddfddcdcf3425e265faae0a89cf73a4f6d00dc345f,PodSandboxId:132a8e6d3ead0d4e5ff53afddbfaaff0137bd11fcdb8fbb78129f30652e3fdc4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1743704000133837527,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c57015-07a6-47fa-bc0f-bcc2e7195ec5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a578cd03e607a81e13022d66fdb2b5370cae784f55bc30f816da04a7ec1ec6,PodSandboxId:b6f69a31423dad6b935bf780be05d07264e870732ae4d77bb7eaeb36bb2ff2ee,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743703990928808585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0eddf727-033f-41ff-b2f9-c29be537cb75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0014cbae1c33eb826b1364bacd37c37fbed6efe45c9e8b0044b6a8eb13b0,PodSandboxId:6155c1e959dc322c5e26b7211e47b150dfc69843bfd94701944e9c32bfd1585d,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743703989219003606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-78bx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7330318b-714b-4090-b6cd-29d5cd13118c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:ee7ca7e19fbe52c634322c36e9e254da83b5ee527ee4aba9ba98fe071a624e6e,PodSandboxId:f5bff2f2e330ddaf8bcf13e094f4b89ad06c429903a127c2a1492fc3ac27b477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743703986435654476,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkjxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60927c3-2084-4e4d-b850-53787b6f9165,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de56db66b83b496b2fe36137a1c14
13e0845a48ab0559558bbae45ed191745b8,PodSandboxId:17d3410162ee9ecce959f04bf5179376a88951fa2101033eb1834cd73f689471,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743703975688005566,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2970be934eaf3d47ca89e34c232c673a,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532ea6c67d54dfafb5141b6ed60762c19033836b8689f0
fccc615240bfd9a1ae,PodSandboxId:149b4cf073fb1734773abc7b168082c535c697333bfaf99b85a960b2cf5b8f0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743703975683366049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7b656147d41866d233d75a72031852,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504c41ff9805b63f9a6994be4d2f5364d5c5
e35e19794b1be6a48583371eaca4,PodSandboxId:b510599559715db6e9a4a7a61828f0dd8992862b2239a7bd94690a740164ce42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743703975690957262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961e1e4cd651808ef42ccdc558561ee2,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e65731e586849c3ba8ffa3e3957b7bf764832da4913ebe03a36ecbd6cc7f5e7,PodSandboxId:e16d8
a71deed5971d83e43eb7052eae2e7f4a38fc37ecfef309bdd5c642b955a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743703975672889528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dade4ac5fa67eaef7d137deee211da6f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b53fbb1-f881-47f4-bd83-1a71f8e34b1e name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.374142520Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4ad8a75-9d22-4015-9483-4666ca3959f6 name=/runtime.v1.RuntimeService/Version
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.374280449Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4ad8a75-9d22-4015-9483-4666ca3959f6 name=/runtime.v1.RuntimeService/Version
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.375279250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23eedaa2-7c3d-4dad-a4ed-50f19958347c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.376600265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743704295376575241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23eedaa2-7c3d-4dad-a4ed-50f19958347c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.377003066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3fd17b3-d28e-4d8c-b8c3-d723527c1cf1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.377076424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3fd17b3-d28e-4d8c-b8c3-d723527c1cf1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.377422906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aabc9389d17f5de1aaa5a7136c4ae299932937e4421e17a2d784ff650abf86af,PodSandboxId:979eeec2143f145ad97d036e56398ed9917dd546b5c902287e13214f4763b91d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1743704158213968311,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 70b18ed9-c3b9-4c7b-83b1-fc83571346b9,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e8a1fec43be64d1836bcf6440798ac00ca83e022b013722a14891cb27ce78ec,PodSandboxId:db41adc95944ea1d4b284d2123c409ecfed763991752fe216454e396977a2b4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1743704078094832582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37f82061-84c9-4077-b38d-c8cf2a067e89,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931198b4671e48ff1b729062999926d7c7d9e1aee37ae476fb0f962c898a556,PodSandboxId:19db2d0c3c6df93232b7568e728dd0ffac34d429e5053504e32e69e5ca0eb2a7,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1743704064660614330,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rqlmj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d4e75281-7657-4e9f-aaa3-5c2237a53782,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:454dbb85dca5fe8687b6938bd7ffd90a8bbe29e9165b9767e8ee9d01263ec1c1,PodSandboxId:e82e4b25fd1101e48c72b96cffeda5df7d684203e0286ca65e748eab292f318e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743704048783807422,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7jjvv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 54485935-e1f9-404a-9df0-e1c397d07b1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f7e4d3a1d5d4e4cae3547d5ecb2668b3d7a9b9e6abec351de218b63b2db52f,PodSandboxId:7b7b65c29f202e637e4ba4925f4def6013ce5b32b06ff0192d0e8d59f31f0982,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743704048665825326,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6dmq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efe783b9-d665-4fab-9fb6-c7bc290b9891,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4fee71a78d30037da5b44853c735972fe5194dc482b3a734647a0eff43b488,PodSandboxId:9eb3936b0f73eb57d4d07c7f0d2aea346a8b53685d7c09215cdca044b85f36fe,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1743704016477178580,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7p29k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d39cd37-af72-4a14-8531-1f66cd7238c7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f597c251d0fef19aace1ddfddcdcf3425e265faae0a89cf73a4f6d00dc345f,PodSandboxId:132a8e6d3ead0d4e5ff53afddbfaaff0137bd11fcdb8fbb78129f30652e3fdc4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1743704000133837527,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c57015-07a6-47fa-bc0f-bcc2e7195ec5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a578cd03e607a81e13022d66fdb2b5370cae784f55bc30f816da04a7ec1ec6,PodSandboxId:b6f69a31423dad6b935bf780be05d07264e870732ae4d77bb7eaeb36bb2ff2ee,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743703990928808585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0eddf727-033f-41ff-b2f9-c29be537cb75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0014cbae1c33eb826b1364bacd37c37fbed6efe45c9e8b0044b6a8eb13b0,PodSandboxId:6155c1e959dc322c5e26b7211e47b150dfc69843bfd94701944e9c32bfd1585d,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743703989219003606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-78bx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7330318b-714b-4090-b6cd-29d5cd13118c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:ee7ca7e19fbe52c634322c36e9e254da83b5ee527ee4aba9ba98fe071a624e6e,PodSandboxId:f5bff2f2e330ddaf8bcf13e094f4b89ad06c429903a127c2a1492fc3ac27b477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743703986435654476,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkjxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60927c3-2084-4e4d-b850-53787b6f9165,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de56db66b83b496b2fe36137a1c14
13e0845a48ab0559558bbae45ed191745b8,PodSandboxId:17d3410162ee9ecce959f04bf5179376a88951fa2101033eb1834cd73f689471,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743703975688005566,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2970be934eaf3d47ca89e34c232c673a,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532ea6c67d54dfafb5141b6ed60762c19033836b8689f0
fccc615240bfd9a1ae,PodSandboxId:149b4cf073fb1734773abc7b168082c535c697333bfaf99b85a960b2cf5b8f0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743703975683366049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7b656147d41866d233d75a72031852,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504c41ff9805b63f9a6994be4d2f5364d5c5
e35e19794b1be6a48583371eaca4,PodSandboxId:b510599559715db6e9a4a7a61828f0dd8992862b2239a7bd94690a740164ce42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743703975690957262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961e1e4cd651808ef42ccdc558561ee2,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e65731e586849c3ba8ffa3e3957b7bf764832da4913ebe03a36ecbd6cc7f5e7,PodSandboxId:e16d8
a71deed5971d83e43eb7052eae2e7f4a38fc37ecfef309bdd5c642b955a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743703975672889528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dade4ac5fa67eaef7d137deee211da6f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3fd17b3-d28e-4d8c-b8c3-d723527c1cf1 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.410852556Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c65eac90-8ae8-4b00-bf09-e9b701d1be84 name=/runtime.v1.RuntimeService/Version
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.410931054Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c65eac90-8ae8-4b00-bf09-e9b701d1be84 name=/runtime.v1.RuntimeService/Version
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.411760786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ace560ed-1298-4c24-baa7-f0970f88130b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.413354955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743704295413328026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ace560ed-1298-4c24-baa7-f0970f88130b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.413803644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d72c8f18-2a05-4021-adbb-8d765d71f4be name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.413858130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d72c8f18-2a05-4021-adbb-8d765d71f4be name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.414163656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aabc9389d17f5de1aaa5a7136c4ae299932937e4421e17a2d784ff650abf86af,PodSandboxId:979eeec2143f145ad97d036e56398ed9917dd546b5c902287e13214f4763b91d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1743704158213968311,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 70b18ed9-c3b9-4c7b-83b1-fc83571346b9,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e8a1fec43be64d1836bcf6440798ac00ca83e022b013722a14891cb27ce78ec,PodSandboxId:db41adc95944ea1d4b284d2123c409ecfed763991752fe216454e396977a2b4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1743704078094832582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37f82061-84c9-4077-b38d-c8cf2a067e89,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931198b4671e48ff1b729062999926d7c7d9e1aee37ae476fb0f962c898a556,PodSandboxId:19db2d0c3c6df93232b7568e728dd0ffac34d429e5053504e32e69e5ca0eb2a7,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1743704064660614330,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rqlmj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d4e75281-7657-4e9f-aaa3-5c2237a53782,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:454dbb85dca5fe8687b6938bd7ffd90a8bbe29e9165b9767e8ee9d01263ec1c1,PodSandboxId:e82e4b25fd1101e48c72b96cffeda5df7d684203e0286ca65e748eab292f318e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743704048783807422,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7jjvv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 54485935-e1f9-404a-9df0-e1c397d07b1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f7e4d3a1d5d4e4cae3547d5ecb2668b3d7a9b9e6abec351de218b63b2db52f,PodSandboxId:7b7b65c29f202e637e4ba4925f4def6013ce5b32b06ff0192d0e8d59f31f0982,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743704048665825326,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6dmq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efe783b9-d665-4fab-9fb6-c7bc290b9891,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4fee71a78d30037da5b44853c735972fe5194dc482b3a734647a0eff43b488,PodSandboxId:9eb3936b0f73eb57d4d07c7f0d2aea346a8b53685d7c09215cdca044b85f36fe,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1743704016477178580,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7p29k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d39cd37-af72-4a14-8531-1f66cd7238c7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f597c251d0fef19aace1ddfddcdcf3425e265faae0a89cf73a4f6d00dc345f,PodSandboxId:132a8e6d3ead0d4e5ff53afddbfaaff0137bd11fcdb8fbb78129f30652e3fdc4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1743704000133837527,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c57015-07a6-47fa-bc0f-bcc2e7195ec5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a578cd03e607a81e13022d66fdb2b5370cae784f55bc30f816da04a7ec1ec6,PodSandboxId:b6f69a31423dad6b935bf780be05d07264e870732ae4d77bb7eaeb36bb2ff2ee,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743703990928808585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0eddf727-033f-41ff-b2f9-c29be537cb75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0014cbae1c33eb826b1364bacd37c37fbed6efe45c9e8b0044b6a8eb13b0,PodSandboxId:6155c1e959dc322c5e26b7211e47b150dfc69843bfd94701944e9c32bfd1585d,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743703989219003606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-78bx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7330318b-714b-4090-b6cd-29d5cd13118c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:ee7ca7e19fbe52c634322c36e9e254da83b5ee527ee4aba9ba98fe071a624e6e,PodSandboxId:f5bff2f2e330ddaf8bcf13e094f4b89ad06c429903a127c2a1492fc3ac27b477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743703986435654476,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkjxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60927c3-2084-4e4d-b850-53787b6f9165,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de56db66b83b496b2fe36137a1c14
13e0845a48ab0559558bbae45ed191745b8,PodSandboxId:17d3410162ee9ecce959f04bf5179376a88951fa2101033eb1834cd73f689471,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743703975688005566,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2970be934eaf3d47ca89e34c232c673a,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532ea6c67d54dfafb5141b6ed60762c19033836b8689f0
fccc615240bfd9a1ae,PodSandboxId:149b4cf073fb1734773abc7b168082c535c697333bfaf99b85a960b2cf5b8f0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743703975683366049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7b656147d41866d233d75a72031852,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504c41ff9805b63f9a6994be4d2f5364d5c5
e35e19794b1be6a48583371eaca4,PodSandboxId:b510599559715db6e9a4a7a61828f0dd8992862b2239a7bd94690a740164ce42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743703975690957262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961e1e4cd651808ef42ccdc558561ee2,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e65731e586849c3ba8ffa3e3957b7bf764832da4913ebe03a36ecbd6cc7f5e7,PodSandboxId:e16d8
a71deed5971d83e43eb7052eae2e7f4a38fc37ecfef309bdd5c642b955a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743703975672889528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dade4ac5fa67eaef7d137deee211da6f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d72c8f18-2a05-4021-adbb-8d765d71f4be name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.444954881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9856ed2e-a7e8-4c89-b51e-2e869a85ad61 name=/runtime.v1.RuntimeService/Version
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.445035550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9856ed2e-a7e8-4c89-b51e-2e869a85ad61 name=/runtime.v1.RuntimeService/Version
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.446363094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2ecac0f-4a94-4dfa-996a-6b275afff330 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.447547899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743704295447521138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2ecac0f-4a94-4dfa-996a-6b275afff330 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.448046916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf41384a-9165-4781-9eeb-3dd64e2d460e name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.448102967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf41384a-9165-4781-9eeb-3dd64e2d460e name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 18:18:15 addons-445082 crio[668]: time="2025-04-03 18:18:15.448455021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aabc9389d17f5de1aaa5a7136c4ae299932937e4421e17a2d784ff650abf86af,PodSandboxId:979eeec2143f145ad97d036e56398ed9917dd546b5c902287e13214f4763b91d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1743704158213968311,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 70b18ed9-c3b9-4c7b-83b1-fc83571346b9,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e8a1fec43be64d1836bcf6440798ac00ca83e022b013722a14891cb27ce78ec,PodSandboxId:db41adc95944ea1d4b284d2123c409ecfed763991752fe216454e396977a2b4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1743704078094832582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37f82061-84c9-4077-b38d-c8cf2a067e89,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6931198b4671e48ff1b729062999926d7c7d9e1aee37ae476fb0f962c898a556,PodSandboxId:19db2d0c3c6df93232b7568e728dd0ffac34d429e5053504e32e69e5ca0eb2a7,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1743704064660614330,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rqlmj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d4e75281-7657-4e9f-aaa3-5c2237a53782,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:454dbb85dca5fe8687b6938bd7ffd90a8bbe29e9165b9767e8ee9d01263ec1c1,PodSandboxId:e82e4b25fd1101e48c72b96cffeda5df7d684203e0286ca65e748eab292f318e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743704048783807422,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7jjvv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 54485935-e1f9-404a-9df0-e1c397d07b1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f7e4d3a1d5d4e4cae3547d5ecb2668b3d7a9b9e6abec351de218b63b2db52f,PodSandboxId:7b7b65c29f202e637e4ba4925f4def6013ce5b32b06ff0192d0e8d59f31f0982,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743704048665825326,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6dmq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efe783b9-d665-4fab-9fb6-c7bc290b9891,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4fee71a78d30037da5b44853c735972fe5194dc482b3a734647a0eff43b488,PodSandboxId:9eb3936b0f73eb57d4d07c7f0d2aea346a8b53685d7c09215cdca044b85f36fe,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1743704016477178580,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7p29k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d39cd37-af72-4a14-8531-1f66cd7238c7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f597c251d0fef19aace1ddfddcdcf3425e265faae0a89cf73a4f6d00dc345f,PodSandboxId:132a8e6d3ead0d4e5ff53afddbfaaff0137bd11fcdb8fbb78129f30652e3fdc4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1743704000133837527,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c57015-07a6-47fa-bc0f-bcc2e7195ec5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a578cd03e607a81e13022d66fdb2b5370cae784f55bc30f816da04a7ec1ec6,PodSandboxId:b6f69a31423dad6b935bf780be05d07264e870732ae4d77bb7eaeb36bb2ff2ee,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743703990928808585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0eddf727-033f-41ff-b2f9-c29be537cb75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aec0014cbae1c33eb826b1364bacd37c37fbed6efe45c9e8b0044b6a8eb13b0,PodSandboxId:6155c1e959dc322c5e26b7211e47b150dfc69843bfd94701944e9c32bfd1585d,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743703989219003606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-78bx4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7330318b-714b-4090-b6cd-29d5cd13118c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:ee7ca7e19fbe52c634322c36e9e254da83b5ee527ee4aba9ba98fe071a624e6e,PodSandboxId:f5bff2f2e330ddaf8bcf13e094f4b89ad06c429903a127c2a1492fc3ac27b477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743703986435654476,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bkjxx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60927c3-2084-4e4d-b850-53787b6f9165,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de56db66b83b496b2fe36137a1c14
13e0845a48ab0559558bbae45ed191745b8,PodSandboxId:17d3410162ee9ecce959f04bf5179376a88951fa2101033eb1834cd73f689471,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743703975688005566,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2970be934eaf3d47ca89e34c232c673a,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532ea6c67d54dfafb5141b6ed60762c19033836b8689f0
fccc615240bfd9a1ae,PodSandboxId:149b4cf073fb1734773abc7b168082c535c697333bfaf99b85a960b2cf5b8f0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743703975683366049,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7b656147d41866d233d75a72031852,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504c41ff9805b63f9a6994be4d2f5364d5c5
e35e19794b1be6a48583371eaca4,PodSandboxId:b510599559715db6e9a4a7a61828f0dd8992862b2239a7bd94690a740164ce42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743703975690957262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 961e1e4cd651808ef42ccdc558561ee2,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e65731e586849c3ba8ffa3e3957b7bf764832da4913ebe03a36ecbd6cc7f5e7,PodSandboxId:e16d8
a71deed5971d83e43eb7052eae2e7f4a38fc37ecfef309bdd5c642b955a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743703975672889528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-445082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dade4ac5fa67eaef7d137deee211da6f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf41384a-9165-4781-9eeb-3dd64e2d460e name=/runtime.v1.RuntimeServ
ice/ListContainers
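
The entries above are the runtime's responses to kubelet's steady-state CRI polling over the gRPC socket: RuntimeService/Version, ImageService/ImageFsInfo, and an unfiltered RuntimeService/ListContainers, repeated a few times in quick succession. As a minimal sketch (assuming the unix:///var/run/crio/crio.sock path shown in the node's cri-socket annotation further down; the timeout and output format are illustrative), the same three calls can be issued directly with the upstream CRI API client:

	// Sketch only: issues the Version, ListContainers and ImageFsInfo calls
	// seen in the CRI-O debug log above against the CRI gRPC socket.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path as reported in the node annotations; adjust if different.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("Version: %v", err)
		}
		fmt.Printf("%s %s\n", ver.GetRuntimeName(), ver.GetRuntimeVersion())

		// RuntimeService/ListContainers with an empty filter, as in the log
		// ("No filters were applied, returning full container list").
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range list.GetContainers() {
			id := c.GetId()
			if len(id) > 13 {
				id = id[:13] // same truncation as the container status table below
			}
			fmt.Printf("%s\t%s\t%s\n", id, c.GetMetadata().GetName(), c.GetState())
		}

		// ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatalf("ImageFsInfo: %v", err)
		}
		for _, f := range fs.GetImageFilesystems() {
			fmt.Printf("%s used=%d bytes\n", f.GetFsId().GetMountpoint(), f.GetUsedBytes().GetValue())
		}
	}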
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aabc9389d17f5       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   979eeec2143f1       nginx
	8e8a1fec43be6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   db41adc95944e       busybox
	6931198b4671e       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   19db2d0c3c6df       ingress-nginx-controller-56d7c84fd4-rqlmj
	454dbb85dca5f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              patch                     0                   e82e4b25fd110       ingress-nginx-admission-patch-7jjvv
	b2f7e4d3a1d5d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   7b7b65c29f202       ingress-nginx-admission-create-6dmq2
	2b4fee71a78d3       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   9eb3936b0f73e       amd-gpu-device-plugin-7p29k
	03f597c251d0f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   132a8e6d3ead0       kube-ingress-dns-minikube
	f6a578cd03e60       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   b6f69a31423da       storage-provisioner
	4aec0014cbae1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   6155c1e959dc3       coredns-668d6bf9bc-78bx4
	ee7ca7e19fbe5       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             5 minutes ago       Running             kube-proxy                0                   f5bff2f2e330d       kube-proxy-bkjxx
	504c41ff9805b       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             5 minutes ago       Running             etcd                      0                   b510599559715       etcd-addons-445082
	de56db66b83b4       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             5 minutes ago       Running             kube-scheduler            0                   17d3410162ee9       kube-scheduler-addons-445082
	532ea6c67d54d       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             5 minutes ago       Running             kube-controller-manager   0                   149b4cf073fb1       kube-controller-manager-addons-445082
	7e65731e58684       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             5 minutes ago       Running             kube-apiserver            0                   e16d8a71deed5       kube-apiserver-addons-445082
	
	
	==> coredns [4aec0014cbae1c33eb826b1364bacd37c37fbed6efe45c9e8b0044b6a8eb13b0] <==
	[INFO] 10.244.0.8:41464 - 50135 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000444195s
	[INFO] 10.244.0.8:41464 - 18397 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000079765s
	[INFO] 10.244.0.8:41464 - 8244 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000107263s
	[INFO] 10.244.0.8:41464 - 40690 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000084375s
	[INFO] 10.244.0.8:41464 - 17376 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000196384s
	[INFO] 10.244.0.8:41464 - 38497 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000087178s
	[INFO] 10.244.0.8:41464 - 16448 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000073213s
	[INFO] 10.244.0.8:60918 - 3733 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000271889s
	[INFO] 10.244.0.8:60918 - 3993 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001117s
	[INFO] 10.244.0.8:40627 - 46585 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122409s
	[INFO] 10.244.0.8:40627 - 46832 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110002s
	[INFO] 10.244.0.8:52636 - 11050 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000170478s
	[INFO] 10.244.0.8:52636 - 11400 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112541s
	[INFO] 10.244.0.8:52435 - 50908 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116296s
	[INFO] 10.244.0.8:52435 - 51135 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000152795s
	[INFO] 10.244.0.23:33096 - 27729 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000484316s
	[INFO] 10.244.0.23:38344 - 34557 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147437s
	[INFO] 10.244.0.23:44028 - 53462 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122251s
	[INFO] 10.244.0.23:41014 - 63868 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110687s
	[INFO] 10.244.0.23:59479 - 59625 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086354s
	[INFO] 10.244.0.23:43201 - 40217 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064644s
	[INFO] 10.244.0.23:60191 - 7532 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001236584s
	[INFO] 10.244.0.23:36157 - 13743 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000873605s
	[INFO] 10.244.0.26:52685 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000323336s
	[INFO] 10.244.0.26:45087 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116338s
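
The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-list expansion: with the default pod setting of ndots:5, a name such as registry.kube-system.svc.cluster.local has fewer than five dots, so CoreDNS is first queried with each search domain appended (hence the *.kube-system.svc.cluster.local.kube-system.svc.cluster.local queries) before the absolute name answers NOERROR. A minimal in-pod sketch of such a lookup, with the service name taken from the log and everything else illustrative:

	// Sketch only: from inside a pod, a single LookupHost for a name with
	// fewer than ndots(=5) dots walks the search list first, producing the
	// NXDOMAIN sequence recorded by CoreDNS above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()

		addrs, err := net.DefaultResolver.LookupHost(ctx, "registry.kube-system.svc.cluster.local")
		if err != nil {
			log.Fatalf("lookup: %v", err)
		}
		fmt.Println(addrs)
	}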
	
	
	==> describe nodes <==
	Name:               addons-445082
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-445082
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053
	                    minikube.k8s.io/name=addons-445082
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_03T18_13_01_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-445082
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 03 Apr 2025 18:12:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-445082
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 03 Apr 2025 18:18:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 03 Apr 2025 18:16:34 +0000   Thu, 03 Apr 2025 18:12:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 03 Apr 2025 18:16:34 +0000   Thu, 03 Apr 2025 18:12:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 03 Apr 2025 18:16:34 +0000   Thu, 03 Apr 2025 18:12:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 03 Apr 2025 18:16:34 +0000   Thu, 03 Apr 2025 18:13:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    addons-445082
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dd04b3f3edc47a3b0f7d5eea9494381
	  System UUID:                2dd04b3f-3edc-47a3-b0f7-d5eea9494381
	  Boot ID:                    89a96650-899c-421a-acb3-1110a74f98c6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  default                     hello-world-app-7d9564db4-hfz5t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-rqlmj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m3s
	  kube-system                 amd-gpu-device-plugin-7p29k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 coredns-668d6bf9bc-78bx4                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m10s
	  kube-system                 etcd-addons-445082                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m15s
	  kube-system                 kube-apiserver-addons-445082                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-addons-445082        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-bkjxx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-addons-445082                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m8s   kube-proxy       
	  Normal  Starting                 5m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m15s  kubelet          Node addons-445082 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m15s  kubelet          Node addons-445082 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s  kubelet          Node addons-445082 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m14s  kubelet          Node addons-445082 status is now: NodeReady
	  Normal  RegisteredNode           5m11s  node-controller  Node addons-445082 event: Registered Node addons-445082 in Controller
	
	
	==> dmesg <==
	[  +5.014856] kauditd_printk_skb: 99 callbacks suppressed
	[  +5.228039] kauditd_printk_skb: 139 callbacks suppressed
	[ +18.271083] kauditd_printk_skb: 77 callbacks suppressed
	[ +19.445893] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.719488] kauditd_printk_skb: 2 callbacks suppressed
	[Apr 3 18:14] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.105929] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.111115] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.260473] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.135977] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.422318] kauditd_printk_skb: 18 callbacks suppressed
	[ +15.818424] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.851987] kauditd_printk_skb: 6 callbacks suppressed
	[Apr 3 18:15] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.897087] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.842688] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.418159] kauditd_printk_skb: 3 callbacks suppressed
	[ +21.572003] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.169078] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.573186] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.392865] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 3 18:16] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.566836] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.139751] kauditd_printk_skb: 16 callbacks suppressed
	[ +22.697998] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [504c41ff9805b63f9a6994be4d2f5364d5c5e35e19794b1be6a48583371eaca4] <==
	{"level":"warn","ts":"2025-04-03T18:14:15.146693Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.881739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T18:14:15.149024Z","caller":"traceutil/trace.go:171","msg":"trace[336154354] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1045; }","duration":"251.231705ms","start":"2025-04-03T18:14:14.897780Z","end":"2025-04-03T18:14:15.149011Z","steps":["trace[336154354] 'agreement among raft nodes before linearized reading'  (duration: 248.887579ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T18:14:21.934486Z","caller":"traceutil/trace.go:171","msg":"trace[133639114] linearizableReadLoop","detail":"{readStateIndex:1109; appliedIndex:1108; }","duration":"227.425309ms","start":"2025-04-03T18:14:21.707040Z","end":"2025-04-03T18:14:21.934465Z","steps":["trace[133639114] 'read index received'  (duration: 227.243606ms)","trace[133639114] 'applied index is now lower than readState.Index'  (duration: 181.159µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-03T18:14:21.934577Z","caller":"traceutil/trace.go:171","msg":"trace[1640998058] transaction","detail":"{read_only:false; response_revision:1076; number_of_response:1; }","duration":"445.229187ms","start":"2025-04-03T18:14:21.489341Z","end":"2025-04-03T18:14:21.934571Z","steps":["trace[1640998058] 'process raft request'  (duration: 444.941845ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T18:14:21.934688Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T18:14:21.489326Z","time spent":"445.26993ms","remote":"127.0.0.1:56466","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1051 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2025-04-03T18:14:21.934739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.325927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T18:14:21.934772Z","caller":"traceutil/trace.go:171","msg":"trace[23649362] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1076; }","duration":"116.382667ms","start":"2025-04-03T18:14:21.818381Z","end":"2025-04-03T18:14:21.934764Z","steps":["trace[23649362] 'agreement among raft nodes before linearized reading'  (duration: 116.328116ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T18:14:21.934872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.826759ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T18:14:21.934886Z","caller":"traceutil/trace.go:171","msg":"trace[1571677091] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1076; }","duration":"227.844721ms","start":"2025-04-03T18:14:21.707037Z","end":"2025-04-03T18:14:21.934882Z","steps":["trace[1571677091] 'agreement among raft nodes before linearized reading'  (duration: 227.815087ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T18:14:23.825501Z","caller":"traceutil/trace.go:171","msg":"trace[1252879930] transaction","detail":"{read_only:false; response_revision:1081; number_of_response:1; }","duration":"225.804558ms","start":"2025-04-03T18:14:23.599650Z","end":"2025-04-03T18:14:23.825455Z","steps":["trace[1252879930] 'process raft request'  (duration: 225.606421ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T18:14:23.826164Z","caller":"traceutil/trace.go:171","msg":"trace[177077608] linearizableReadLoop","detail":"{readStateIndex:1114; appliedIndex:1114; }","duration":"119.22978ms","start":"2025-04-03T18:14:23.706913Z","end":"2025-04-03T18:14:23.826143Z","steps":["trace[177077608] 'read index received'  (duration: 119.22116ms)","trace[177077608] 'applied index is now lower than readState.Index'  (duration: 4.921µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-03T18:14:23.826555Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.62865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T18:14:23.826604Z","caller":"traceutil/trace.go:171","msg":"trace[540861949] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"119.704348ms","start":"2025-04-03T18:14:23.706892Z","end":"2025-04-03T18:14:23.826596Z","steps":["trace[540861949] 'agreement among raft nodes before linearized reading'  (duration: 119.636314ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T18:14:26.303902Z","caller":"traceutil/trace.go:171","msg":"trace[1825890715] transaction","detail":"{read_only:false; response_revision:1099; number_of_response:1; }","duration":"120.713501ms","start":"2025-04-03T18:14:26.183169Z","end":"2025-04-03T18:14:26.303882Z","steps":["trace[1825890715] 'process raft request'  (duration: 120.100033ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T18:14:26.317970Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.58853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T18:14:26.318057Z","caller":"traceutil/trace.go:171","msg":"trace[814391358] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1099; }","duration":"111.706234ms","start":"2025-04-03T18:14:26.206340Z","end":"2025-04-03T18:14:26.318046Z","steps":["trace[814391358] 'agreement among raft nodes before linearized reading'  (duration: 106.073743ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T18:15:52.087106Z","caller":"traceutil/trace.go:171","msg":"trace[876447004] transaction","detail":"{read_only:false; response_revision:1522; number_of_response:1; }","duration":"250.386034ms","start":"2025-04-03T18:15:51.836703Z","end":"2025-04-03T18:15:52.087090Z","steps":["trace[876447004] 'process raft request'  (duration: 250.059055ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T18:15:57.360736Z","caller":"traceutil/trace.go:171","msg":"trace[1132410619] linearizableReadLoop","detail":"{readStateIndex:1630; appliedIndex:1629; }","duration":"220.063231ms","start":"2025-04-03T18:15:57.140659Z","end":"2025-04-03T18:15:57.360723Z","steps":["trace[1132410619] 'read index received'  (duration: 219.941905ms)","trace[1132410619] 'applied index is now lower than readState.Index'  (duration: 120.917µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-03T18:15:57.360882Z","caller":"traceutil/trace.go:171","msg":"trace[1344792595] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1570; }","duration":"252.442056ms","start":"2025-04-03T18:15:57.108430Z","end":"2025-04-03T18:15:57.360872Z","steps":["trace[1344792595] 'process raft request'  (duration: 252.19942ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T18:15:57.360944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.851274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T18:15:57.360986Z","caller":"traceutil/trace.go:171","msg":"trace[646419762] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1570; }","duration":"193.941334ms","start":"2025-04-03T18:15:57.167036Z","end":"2025-04-03T18:15:57.360977Z","steps":["trace[646419762] 'agreement among raft nodes before linearized reading'  (duration: 193.85374ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T18:15:57.361115Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.446778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-04-03T18:15:57.361133Z","caller":"traceutil/trace.go:171","msg":"trace[1987754435] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1570; }","duration":"220.492716ms","start":"2025-04-03T18:15:57.140636Z","end":"2025-04-03T18:15:57.361129Z","steps":["trace[1987754435] 'agreement among raft nodes before linearized reading'  (duration: 220.415558ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T18:15:57.361247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.253895ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-04-03T18:15:57.361277Z","caller":"traceutil/trace.go:171","msg":"trace[266988597] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1570; }","duration":"170.301002ms","start":"2025-04-03T18:15:57.190963Z","end":"2025-04-03T18:15:57.361264Z","steps":["trace[266988597] 'agreement among raft nodes before linearized reading'  (duration: 170.153503ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:18:15 up 5 min,  0 users,  load average: 0.64, 0.84, 0.46
	Linux addons-445082 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7e65731e586849c3ba8ffa3e3957b7bf764832da4913ebe03a36ecbd6cc7f5e7] <==
	E0403 18:13:51.900452       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.187.38:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.187.38:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.187.38:443: connect: connection refused" logger="UnhandledError"
	E0403 18:13:51.904471       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.187.38:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.187.38:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.187.38:443: connect: connection refused" logger="UnhandledError"
	I0403 18:13:51.961143       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0403 18:14:44.496840       1 conn.go:339] Error on socket receive: read tcp 192.168.39.130:8443->192.168.39.1:59752: use of closed network connection
	E0403 18:14:44.670395       1 conn.go:339] Error on socket receive: read tcp 192.168.39.130:8443->192.168.39.1:59784: use of closed network connection
	I0403 18:14:53.929307       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.45.168"}
	I0403 18:15:17.159698       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0403 18:15:18.200910       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0403 18:15:22.625679       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0403 18:15:22.815965       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.249.114"}
	I0403 18:15:52.912596       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0403 18:16:01.151448       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0403 18:16:15.747819       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0403 18:16:15.747854       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0403 18:16:15.773347       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0403 18:16:15.773462       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0403 18:16:15.792546       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0403 18:16:15.792681       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0403 18:16:15.890975       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0403 18:16:15.891056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0403 18:16:16.892963       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0403 18:16:16.923255       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0403 18:16:16.923363       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0403 18:16:24.917131       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0403 18:18:14.399382       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.58.175"}
	
	
	==> kube-controller-manager [532ea6c67d54dfafb5141b6ed60762c19033836b8689f0fccc615240bfd9a1ae] <==
	E0403 18:17:19.243148       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0403 18:17:31.023024       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0403 18:17:31.023926       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0403 18:17:31.024799       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0403 18:17:31.024844       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0403 18:17:40.124359       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0403 18:17:40.125490       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0403 18:17:40.126296       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0403 18:17:40.126333       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0403 18:17:43.814556       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0403 18:17:43.816027       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0403 18:17:43.817016       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0403 18:17:43.817105       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0403 18:18:11.106180       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0403 18:18:11.107241       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0403 18:18:11.108044       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0403 18:18:11.108084       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0403 18:18:14.211022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="30.953231ms"
	I0403 18:18:14.221557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="9.622556ms"
	I0403 18:18:14.221796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="35.914µs"
	I0403 18:18:14.231162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="35.519µs"
	W0403 18:18:14.592845       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0403 18:18:14.594506       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0403 18:18:14.596107       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0403 18:18:14.596180       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [ee7ca7e19fbe52c634322c36e9e254da83b5ee527ee4aba9ba98fe071a624e6e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0403 18:13:07.237262       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0403 18:13:07.268746       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.130"]
	E0403 18:13:07.268834       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0403 18:13:07.665595       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0403 18:13:07.665643       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0403 18:13:07.665665       1 server_linux.go:170] "Using iptables Proxier"
	I0403 18:13:07.668091       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0403 18:13:07.669463       1 server.go:497] "Version info" version="v1.32.2"
	I0403 18:13:07.669478       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 18:13:07.674026       1 config.go:199] "Starting service config controller"
	I0403 18:13:07.674063       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0403 18:13:07.674142       1 config.go:105] "Starting endpoint slice config controller"
	I0403 18:13:07.674147       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0403 18:13:07.674679       1 config.go:329] "Starting node config controller"
	I0403 18:13:07.674685       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0403 18:13:07.774550       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0403 18:13:07.774617       1 shared_informer.go:320] Caches are synced for service config
	I0403 18:13:07.774862       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [de56db66b83b496b2fe36137a1c1413e0845a48ab0559558bbae45ed191745b8] <==
	W0403 18:12:57.848066       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0403 18:12:57.848092       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0403 18:12:57.848093       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 18:12:57.848161       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0403 18:12:57.848258       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0403 18:12:57.848282       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0403 18:12:57.848265       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0403 18:12:57.848163       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 18:12:58.666482       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0403 18:12:58.666534       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 18:12:58.730266       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0403 18:12:58.730412       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0403 18:12:58.832968       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0403 18:12:58.833016       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 18:12:58.954634       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0403 18:12:58.954686       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0403 18:12:59.017693       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0403 18:12:59.017739       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 18:12:59.071180       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0403 18:12:59.071920       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 18:12:59.094858       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0403 18:12:59.094949       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 18:12:59.104390       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0403 18:12:59.105279       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0403 18:13:02.143285       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 03 18:17:50 addons-445082 kubelet[1240]: E0403 18:17:50.690861    1240 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743704270690399534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 18:18:00 addons-445082 kubelet[1240]: E0403 18:18:00.308020    1240 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 03 18:18:00 addons-445082 kubelet[1240]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 03 18:18:00 addons-445082 kubelet[1240]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 03 18:18:00 addons-445082 kubelet[1240]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 03 18:18:00 addons-445082 kubelet[1240]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 03 18:18:00 addons-445082 kubelet[1240]: E0403 18:18:00.693852    1240 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743704280693441018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 18:18:00 addons-445082 kubelet[1240]: E0403 18:18:00.694071    1240 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743704280693441018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 18:18:10 addons-445082 kubelet[1240]: E0403 18:18:10.697166    1240 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743704290696681937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 18:18:10 addons-445082 kubelet[1240]: E0403 18:18:10.697480    1240 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743704290696681937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 18:18:11 addons-445082 kubelet[1240]: I0403 18:18:11.294880    1240 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.209924    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="27e36dc8-0ccf-44f6-8f56-e8da2e37cfb3" containerName="csi-external-health-monitor-controller"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.209971    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bf5be24-8f38-4726-a72c-4522cefb1f8c" containerName="helper-pod"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.209981    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="27e36dc8-0ccf-44f6-8f56-e8da2e37cfb3" containerName="liveness-probe"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.209987    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="c28389f7-facf-45b1-844a-ab1662d8500b" containerName="volume-snapshot-controller"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.209993    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="27e36dc8-0ccf-44f6-8f56-e8da2e37cfb3" containerName="node-driver-registrar"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.209998    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="27e36dc8-0ccf-44f6-8f56-e8da2e37cfb3" containerName="csi-provisioner"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.210008    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="27e36dc8-0ccf-44f6-8f56-e8da2e37cfb3" containerName="csi-snapshotter"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.210017    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="df74d25c-71ab-453e-9ace-e4db8520fb30" containerName="task-pv-container"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.210023    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="4b43e156-617e-401b-a9ec-d1e9aead78ff" containerName="volume-snapshot-controller"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.210031    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="8bde948e-a992-4991-8173-9ff227977bcb" containerName="csi-attacher"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.210040    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="27e36dc8-0ccf-44f6-8f56-e8da2e37cfb3" containerName="hostpath"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.210047    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="6f9249bd-90e6-4f30-8191-4bb3a0917a50" containerName="local-path-provisioner"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.210052    1240 memory_manager.go:355] "RemoveStaleState removing state" podUID="94934d47-449b-408c-b061-66fdbf83e151" containerName="csi-resizer"
	Apr 03 18:18:14 addons-445082 kubelet[1240]: I0403 18:18:14.363769    1240 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ndcc\" (UniqueName: \"kubernetes.io/projected/6b7e034e-91da-4613-b5f1-586ec2404066-kube-api-access-4ndcc\") pod \"hello-world-app-7d9564db4-hfz5t\" (UID: \"6b7e034e-91da-4613-b5f1-586ec2404066\") " pod="default/hello-world-app-7d9564db4-hfz5t"
	
	
	==> storage-provisioner [f6a578cd03e607a81e13022d66fdb2b5370cae784f55bc30f816da04a7ec1ec6] <==
	I0403 18:13:11.214465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0403 18:13:11.262044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0403 18:13:11.262149       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0403 18:13:11.280341       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0403 18:13:11.280609       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-445082_a78eef73-245d-471b-b987-83f723d9f2a8!
	I0403 18:13:11.303436       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d2eee6e0-5aaa-43cf-8e7d-068e7f7de815", APIVersion:"v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-445082_a78eef73-245d-471b-b987-83f723d9f2a8 became leader
	I0403 18:13:11.383693       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-445082_a78eef73-245d-471b-b987-83f723d9f2a8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-445082 -n addons-445082
helpers_test.go:261: (dbg) Run:  kubectl --context addons-445082 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-hfz5t ingress-nginx-admission-create-6dmq2 ingress-nginx-admission-patch-7jjvv
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-445082 describe pod hello-world-app-7d9564db4-hfz5t ingress-nginx-admission-create-6dmq2 ingress-nginx-admission-patch-7jjvv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-445082 describe pod hello-world-app-7d9564db4-hfz5t ingress-nginx-admission-create-6dmq2 ingress-nginx-admission-patch-7jjvv: exit status 1 (66.904966ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-hfz5t
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-445082/192.168.39.130
	Start Time:       Thu, 03 Apr 2025 18:18:14 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4ndcc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4ndcc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-hfz5t to addons-445082
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6dmq2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7jjvv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-445082 describe pod hello-world-app-7d9564db4-hfz5t ingress-nginx-admission-create-6dmq2 ingress-nginx-admission-patch-7jjvv: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-445082 addons disable ingress-dns --alsologtostderr -v=1: (1.092347878s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-445082 addons disable ingress --alsologtostderr -v=1: (7.684117858s)
--- FAIL: TestAddons/parallel/Ingress (182.93s)

                                                
                                    
x
+
TestPreload (213.89s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-159739 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0403 19:09:09.256228   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-159739 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m13.537519652s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-159739 image pull gcr.io/k8s-minikube/busybox
E0403 19:09:34.401246   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-159739 image pull gcr.io/k8s-minikube/busybox: (3.576486795s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-159739
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-159739: (6.582721779s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-159739 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-159739 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.438711159s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-159739 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
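To make the failed expectation concrete: a minimal Go sketch of the kind of check behind preload_test.go:76 is shown below. This is illustrative only, not the actual minikube test code; it assumes nothing beyond the binary path, profile name, and image reference already shown in this report, and simply shells out to `image list` and greps the output.

	// preload_check_sketch_test.go — illustrative sketch, not the real minikube test.
	package preloadcheck

	import (
		"os/exec"
		"strings"
		"testing"
	)

	// TestBusyboxSurvivesRestart approximates the assertion at preload_test.go:76:
	// after the stop/start cycle, the image pulled earlier must still appear in
	// the `image list` output for the same profile.
	func TestBusyboxSurvivesRestart(t *testing.T) {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "test-preload-159739", "image", "list").CombinedOutput()
		if err != nil {
			t.Fatalf("image list failed: %v\n%s", err, out)
		}
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			t.Fatalf("expected gcr.io/k8s-minikube/busybox in image list output, got:\n%s", out)
		}
	}

Run against the image list captured above, such a check fails because only the preloaded k8s.gcr.io/registry.k8s.io images and storage-provisioner survive, which matches the FAIL recorded below.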
panic.go:631: *** TestPreload FAILED at 2025-04-03 19:10:52.079692193 +0000 UTC m=+3552.590567285
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-159739 -n test-preload-159739
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-159739 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-159739 logs -n 25: (1.009552161s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-953539 ssh -n                                                                 | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:55 UTC |
	|         | multinode-953539-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-953539 ssh -n multinode-953539 sudo cat                                       | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:55 UTC |
	|         | /home/docker/cp-test_multinode-953539-m03_multinode-953539.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-953539 cp multinode-953539-m03:/home/docker/cp-test.txt                       | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:55 UTC |
	|         | multinode-953539-m02:/home/docker/cp-test_multinode-953539-m03_multinode-953539-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-953539 ssh -n                                                                 | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:55 UTC |
	|         | multinode-953539-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-953539 ssh -n multinode-953539-m02 sudo cat                                   | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:55 UTC |
	|         | /home/docker/cp-test_multinode-953539-m03_multinode-953539-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-953539 node stop m03                                                          | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:55 UTC |
	| node    | multinode-953539 node start                                                             | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:55 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-953539                                                                | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC |                     |
	| stop    | -p multinode-953539                                                                     | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 18:55 UTC | 03 Apr 25 18:58 UTC |
	| start   | -p multinode-953539                                                                     | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 18:58 UTC | 03 Apr 25 19:01 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-953539                                                                | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 19:01 UTC |                     |
	| node    | multinode-953539 node delete                                                            | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 19:01 UTC | 03 Apr 25 19:01 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-953539 stop                                                                   | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 19:01 UTC | 03 Apr 25 19:04 UTC |
	| start   | -p multinode-953539                                                                     | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 19:04 UTC | 03 Apr 25 19:06 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-953539                                                                | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 19:06 UTC |                     |
	| start   | -p multinode-953539-m02                                                                 | multinode-953539-m02 | jenkins | v1.35.0 | 03 Apr 25 19:06 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-953539-m03                                                                 | multinode-953539-m03 | jenkins | v1.35.0 | 03 Apr 25 19:06 UTC | 03 Apr 25 19:07 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-953539                                                                 | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 19:07 UTC |                     |
	| delete  | -p multinode-953539-m03                                                                 | multinode-953539-m03 | jenkins | v1.35.0 | 03 Apr 25 19:07 UTC | 03 Apr 25 19:07 UTC |
	| delete  | -p multinode-953539                                                                     | multinode-953539     | jenkins | v1.35.0 | 03 Apr 25 19:07 UTC | 03 Apr 25 19:07 UTC |
	| start   | -p test-preload-159739                                                                  | test-preload-159739  | jenkins | v1.35.0 | 03 Apr 25 19:07 UTC | 03 Apr 25 19:09 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-159739 image pull                                                          | test-preload-159739  | jenkins | v1.35.0 | 03 Apr 25 19:09 UTC | 03 Apr 25 19:09 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-159739                                                                  | test-preload-159739  | jenkins | v1.35.0 | 03 Apr 25 19:09 UTC | 03 Apr 25 19:09 UTC |
	| start   | -p test-preload-159739                                                                  | test-preload-159739  | jenkins | v1.35.0 | 03 Apr 25 19:09 UTC | 03 Apr 25 19:10 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-159739 image list                                                          | test-preload-159739  | jenkins | v1.35.0 | 03 Apr 25 19:10 UTC | 03 Apr 25 19:10 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 19:09:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 19:09:44.476235   53204 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:09:44.476481   53204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:09:44.476491   53204 out.go:358] Setting ErrFile to fd 2...
	I0403 19:09:44.476494   53204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:09:44.476643   53204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:09:44.477166   53204 out.go:352] Setting JSON to false
	I0403 19:09:44.478081   53204 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6729,"bootTime":1743700655,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:09:44.478175   53204 start.go:139] virtualization: kvm guest
	I0403 19:09:44.480034   53204 out.go:177] * [test-preload-159739] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:09:44.481173   53204 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:09:44.481180   53204 notify.go:220] Checking for updates...
	I0403 19:09:44.482266   53204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:09:44.483388   53204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:09:44.484398   53204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:09:44.485406   53204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:09:44.486530   53204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:09:44.488120   53204 config.go:182] Loaded profile config "test-preload-159739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0403 19:09:44.488723   53204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:09:44.488782   53204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:09:44.503534   53204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I0403 19:09:44.503979   53204 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:09:44.504456   53204 main.go:141] libmachine: Using API Version  1
	I0403 19:09:44.504486   53204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:09:44.504836   53204 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:09:44.504989   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:09:44.506428   53204 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0403 19:09:44.507329   53204 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:09:44.507584   53204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:09:44.507614   53204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:09:44.521509   53204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45545
	I0403 19:09:44.521853   53204 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:09:44.522214   53204 main.go:141] libmachine: Using API Version  1
	I0403 19:09:44.522235   53204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:09:44.522555   53204 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:09:44.522741   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:09:44.554499   53204 out.go:177] * Using the kvm2 driver based on existing profile
	I0403 19:09:44.555574   53204 start.go:297] selected driver: kvm2
	I0403 19:09:44.555589   53204 start.go:901] validating driver "kvm2" against &{Name:test-preload-159739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-159739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:09:44.555711   53204 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:09:44.556415   53204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:09:44.556493   53204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 19:09:44.570283   53204 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0403 19:09:44.570626   53204 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:09:44.570659   53204 cni.go:84] Creating CNI manager for ""
	I0403 19:09:44.570714   53204 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:09:44.570776   53204 start.go:340] cluster config:
	{Name:test-preload-159739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-159739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:09:44.570905   53204 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:09:44.572332   53204 out.go:177] * Starting "test-preload-159739" primary control-plane node in "test-preload-159739" cluster
	I0403 19:09:44.573283   53204 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0403 19:09:44.674683   53204 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0403 19:09:44.674713   53204 cache.go:56] Caching tarball of preloaded images
	I0403 19:09:44.674874   53204 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0403 19:09:44.676614   53204 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0403 19:09:44.677609   53204 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0403 19:09:44.783247   53204 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0403 19:09:55.756918   53204 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0403 19:09:55.757012   53204 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0403 19:09:56.595110   53204 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0403 19:09:56.595222   53204 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/config.json ...
	I0403 19:09:56.595442   53204 start.go:360] acquireMachinesLock for test-preload-159739: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 19:09:56.595500   53204 start.go:364] duration metric: took 39.381µs to acquireMachinesLock for "test-preload-159739"
	I0403 19:09:56.595514   53204 start.go:96] Skipping create...Using existing machine configuration
	I0403 19:09:56.595518   53204 fix.go:54] fixHost starting: 
	I0403 19:09:56.595797   53204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:09:56.595830   53204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:09:56.610341   53204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0403 19:09:56.610727   53204 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:09:56.611133   53204 main.go:141] libmachine: Using API Version  1
	I0403 19:09:56.611162   53204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:09:56.611489   53204 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:09:56.611652   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:09:56.611781   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetState
	I0403 19:09:56.613257   53204 fix.go:112] recreateIfNeeded on test-preload-159739: state=Stopped err=<nil>
	I0403 19:09:56.613277   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	W0403 19:09:56.613419   53204 fix.go:138] unexpected machine state, will restart: <nil>
	I0403 19:09:56.615549   53204 out.go:177] * Restarting existing kvm2 VM for "test-preload-159739" ...
	I0403 19:09:56.616645   53204 main.go:141] libmachine: (test-preload-159739) Calling .Start
	I0403 19:09:56.616808   53204 main.go:141] libmachine: (test-preload-159739) starting domain...
	I0403 19:09:56.616824   53204 main.go:141] libmachine: (test-preload-159739) ensuring networks are active...
	I0403 19:09:56.617556   53204 main.go:141] libmachine: (test-preload-159739) Ensuring network default is active
	I0403 19:09:56.617849   53204 main.go:141] libmachine: (test-preload-159739) Ensuring network mk-test-preload-159739 is active
	I0403 19:09:56.618216   53204 main.go:141] libmachine: (test-preload-159739) getting domain XML...
	I0403 19:09:56.618944   53204 main.go:141] libmachine: (test-preload-159739) creating domain...
	I0403 19:09:57.802721   53204 main.go:141] libmachine: (test-preload-159739) waiting for IP...
	I0403 19:09:57.803521   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:09:57.803919   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:09:57.803954   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:09:57.803882   53288 retry.go:31] will retry after 241.612502ms: waiting for domain to come up
	I0403 19:09:58.047421   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:09:58.047817   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:09:58.047847   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:09:58.047788   53288 retry.go:31] will retry after 286.949232ms: waiting for domain to come up
	I0403 19:09:58.336292   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:09:58.336694   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:09:58.336723   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:09:58.336658   53288 retry.go:31] will retry after 457.45243ms: waiting for domain to come up
	I0403 19:09:58.795348   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:09:58.795750   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:09:58.795780   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:09:58.795720   53288 retry.go:31] will retry after 533.253026ms: waiting for domain to come up
	I0403 19:09:59.330289   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:09:59.330688   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:09:59.330719   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:09:59.330634   53288 retry.go:31] will retry after 532.405074ms: waiting for domain to come up
	I0403 19:09:59.864142   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:09:59.864455   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:09:59.864500   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:09:59.864439   53288 retry.go:31] will retry after 889.679213ms: waiting for domain to come up
	I0403 19:10:00.755405   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:00.755985   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:10:00.756012   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:10:00.755929   53288 retry.go:31] will retry after 1.05903531s: waiting for domain to come up
	I0403 19:10:01.816390   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:01.816751   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:10:01.816768   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:10:01.816725   53288 retry.go:31] will retry after 1.016891459s: waiting for domain to come up
	I0403 19:10:02.834834   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:02.835227   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:10:02.835254   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:10:02.835180   53288 retry.go:31] will retry after 1.35680028s: waiting for domain to come up
	I0403 19:10:04.193040   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:04.193403   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:10:04.193432   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:10:04.193373   53288 retry.go:31] will retry after 1.704576794s: waiting for domain to come up
	I0403 19:10:05.900222   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:05.900644   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:10:05.900709   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:10:05.900646   53288 retry.go:31] will retry after 1.93867191s: waiting for domain to come up
	I0403 19:10:07.840934   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:07.841368   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:10:07.841393   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:10:07.841348   53288 retry.go:31] will retry after 2.815068257s: waiting for domain to come up
	I0403 19:10:10.660217   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:10.660519   53204 main.go:141] libmachine: (test-preload-159739) DBG | unable to find current IP address of domain test-preload-159739 in network mk-test-preload-159739
	I0403 19:10:10.660545   53204 main.go:141] libmachine: (test-preload-159739) DBG | I0403 19:10:10.660474   53288 retry.go:31] will retry after 3.10261623s: waiting for domain to come up
	I0403 19:10:13.766016   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:13.766482   53204 main.go:141] libmachine: (test-preload-159739) found domain IP: 192.168.39.100
	I0403 19:10:13.766511   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has current primary IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:13.766517   53204 main.go:141] libmachine: (test-preload-159739) reserving static IP address...
	I0403 19:10:13.766963   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "test-preload-159739", mac: "52:54:00:31:2e:ad", ip: "192.168.39.100"} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:13.766985   53204 main.go:141] libmachine: (test-preload-159739) DBG | skip adding static IP to network mk-test-preload-159739 - found existing host DHCP lease matching {name: "test-preload-159739", mac: "52:54:00:31:2e:ad", ip: "192.168.39.100"}
	I0403 19:10:13.767000   53204 main.go:141] libmachine: (test-preload-159739) reserved static IP address 192.168.39.100 for domain test-preload-159739
	I0403 19:10:13.767014   53204 main.go:141] libmachine: (test-preload-159739) waiting for SSH...
	I0403 19:10:13.767025   53204 main.go:141] libmachine: (test-preload-159739) DBG | Getting to WaitForSSH function...
	I0403 19:10:13.768957   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:13.769234   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:13.769258   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:13.769363   53204 main.go:141] libmachine: (test-preload-159739) DBG | Using SSH client type: external
	I0403 19:10:13.769381   53204 main.go:141] libmachine: (test-preload-159739) DBG | Using SSH private key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/test-preload-159739/id_rsa (-rw-------)
	I0403 19:10:13.769448   53204 main.go:141] libmachine: (test-preload-159739) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20591-14371/.minikube/machines/test-preload-159739/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0403 19:10:13.769474   53204 main.go:141] libmachine: (test-preload-159739) DBG | About to run SSH command:
	I0403 19:10:13.769488   53204 main.go:141] libmachine: (test-preload-159739) DBG | exit 0
	I0403 19:10:13.890609   53204 main.go:141] libmachine: (test-preload-159739) DBG | SSH cmd err, output: <nil>: 
	I0403 19:10:13.891015   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetConfigRaw
	I0403 19:10:13.891760   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetIP
	I0403 19:10:13.894279   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:13.894586   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:13.894612   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:13.894885   53204 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/config.json ...
	I0403 19:10:13.895100   53204 machine.go:93] provisionDockerMachine start ...
	I0403 19:10:13.895130   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:10:13.895337   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:13.897658   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:13.897944   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:13.897967   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:13.898084   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:13.898258   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:13.898434   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:13.898561   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:13.898748   53204 main.go:141] libmachine: Using SSH client type: native
	I0403 19:10:13.899202   53204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0403 19:10:13.899220   53204 main.go:141] libmachine: About to run SSH command:
	hostname
	I0403 19:10:13.994726   53204 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0403 19:10:13.994764   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetMachineName
	I0403 19:10:13.995024   53204 buildroot.go:166] provisioning hostname "test-preload-159739"
	I0403 19:10:13.995059   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetMachineName
	I0403 19:10:13.995253   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:13.997869   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:13.998274   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:13.998301   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:13.998406   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:13.998578   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:13.998739   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:13.998874   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:13.999046   53204 main.go:141] libmachine: Using SSH client type: native
	I0403 19:10:13.999279   53204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0403 19:10:13.999292   53204 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-159739 && echo "test-preload-159739" | sudo tee /etc/hostname
	I0403 19:10:14.107814   53204 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-159739
	
	I0403 19:10:14.107840   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:14.110446   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.110744   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:14.110771   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.111014   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:14.111183   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:14.111354   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:14.111475   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:14.111614   53204 main.go:141] libmachine: Using SSH client type: native
	I0403 19:10:14.111796   53204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0403 19:10:14.111811   53204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-159739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-159739/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-159739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0403 19:10:14.222360   53204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:10:14.222396   53204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20591-14371/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-14371/.minikube}
	I0403 19:10:14.222445   53204 buildroot.go:174] setting up certificates
	I0403 19:10:14.222457   53204 provision.go:84] configureAuth start
	I0403 19:10:14.222472   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetMachineName
	I0403 19:10:14.222771   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetIP
	I0403 19:10:14.225380   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.225678   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:14.225718   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.225897   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:14.227969   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.228281   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:14.228307   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.228413   53204 provision.go:143] copyHostCerts
	I0403 19:10:14.228468   53204 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem, removing ...
	I0403 19:10:14.228482   53204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem
	I0403 19:10:14.228550   53204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem (1082 bytes)
	I0403 19:10:14.228674   53204 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem, removing ...
	I0403 19:10:14.228685   53204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem
	I0403 19:10:14.228727   53204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem (1123 bytes)
	I0403 19:10:14.228805   53204 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem, removing ...
	I0403 19:10:14.228814   53204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem
	I0403 19:10:14.228846   53204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem (1675 bytes)
	I0403 19:10:14.228913   53204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem org=jenkins.test-preload-159739 san=[127.0.0.1 192.168.39.100 localhost minikube test-preload-159739]
	I0403 19:10:14.261462   53204 provision.go:177] copyRemoteCerts
	I0403 19:10:14.261516   53204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0403 19:10:14.261539   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:14.264044   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.264370   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:14.264396   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.264529   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:14.264690   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:14.264834   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:14.264991   53204 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/test-preload-159739/id_rsa Username:docker}
	I0403 19:10:14.344524   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0403 19:10:14.366527   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0403 19:10:14.388476   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0403 19:10:14.410188   53204 provision.go:87] duration metric: took 187.719579ms to configureAuth
	I0403 19:10:14.410213   53204 buildroot.go:189] setting minikube options for container-runtime
	I0403 19:10:14.410372   53204 config.go:182] Loaded profile config "test-preload-159739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0403 19:10:14.410464   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:14.413450   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.413767   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:14.413793   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.413940   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:14.414118   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:14.414271   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:14.414400   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:14.414572   53204 main.go:141] libmachine: Using SSH client type: native
	I0403 19:10:14.414806   53204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0403 19:10:14.414838   53204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0403 19:10:14.628102   53204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0403 19:10:14.628129   53204 machine.go:96] duration metric: took 733.015015ms to provisionDockerMachine
	I0403 19:10:14.628150   53204 start.go:293] postStartSetup for "test-preload-159739" (driver="kvm2")
	I0403 19:10:14.628163   53204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0403 19:10:14.628185   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:10:14.628472   53204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0403 19:10:14.628512   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:14.630939   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.631315   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:14.631341   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.631430   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:14.631586   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:14.631718   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:14.631826   53204 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/test-preload-159739/id_rsa Username:docker}
	I0403 19:10:14.708604   53204 ssh_runner.go:195] Run: cat /etc/os-release
	I0403 19:10:14.712282   53204 info.go:137] Remote host: Buildroot 2023.02.9
	I0403 19:10:14.712309   53204 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/addons for local assets ...
	I0403 19:10:14.712372   53204 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/files for local assets ...
	I0403 19:10:14.712459   53204 filesync.go:149] local asset: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem -> 215522.pem in /etc/ssl/certs
	I0403 19:10:14.712543   53204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0403 19:10:14.720831   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:10:14.741968   53204 start.go:296] duration metric: took 113.803169ms for postStartSetup
	I0403 19:10:14.742015   53204 fix.go:56] duration metric: took 18.146494949s for fixHost
	I0403 19:10:14.742047   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:14.744679   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.744976   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:14.745005   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.745171   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:14.745342   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:14.745490   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:14.745621   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:14.745781   53204 main.go:141] libmachine: Using SSH client type: native
	I0403 19:10:14.745965   53204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0403 19:10:14.745976   53204 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0403 19:10:14.843076   53204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743707414.817893573
	
	I0403 19:10:14.843098   53204 fix.go:216] guest clock: 1743707414.817893573
	I0403 19:10:14.843107   53204 fix.go:229] Guest: 2025-04-03 19:10:14.817893573 +0000 UTC Remote: 2025-04-03 19:10:14.74202714 +0000 UTC m=+30.299227325 (delta=75.866433ms)
	I0403 19:10:14.843141   53204 fix.go:200] guest clock delta is within tolerance: 75.866433ms
	I0403 19:10:14.843146   53204 start.go:83] releasing machines lock for "test-preload-159739", held for 18.247637892s
	I0403 19:10:14.843169   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:10:14.843411   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetIP
	I0403 19:10:14.845908   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.846230   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:14.846341   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.846460   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:10:14.847006   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:10:14.847156   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:10:14.847241   53204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0403 19:10:14.847284   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:14.847402   53204 ssh_runner.go:195] Run: cat /version.json
	I0403 19:10:14.847428   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:14.849730   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.850096   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:14.850122   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.850150   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.850270   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:14.850441   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:14.850538   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:14.850565   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:14.850573   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:14.850673   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:14.850746   53204 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/test-preload-159739/id_rsa Username:docker}
	I0403 19:10:14.850778   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:14.850901   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:14.851068   53204 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/test-preload-159739/id_rsa Username:docker}
	I0403 19:10:14.962206   53204 ssh_runner.go:195] Run: systemctl --version
	I0403 19:10:14.968008   53204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
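The four Run lines above all travel over the per-profile SSH credentials reported by sshutil. A hand-run equivalent, assuming the key path, docker user and VM IP shown in the sshutil/DHCP lines (a sketch, not what minikube itself executes):

    # Manual version/connectivity probe against the preload VM.
    SSH_KEY=/home/jenkins/minikube-integration/20591-14371/.minikube/machines/test-preload-159739/id_rsa
    ssh -i "$SSH_KEY" docker@192.168.39.100 \
      'cat /version.json; curl -sS -m 2 https://registry.k8s.io/ >/dev/null && echo registry reachable'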
	I0403 19:10:15.107916   53204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0403 19:10:15.113633   53204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0403 19:10:15.113696   53204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0403 19:10:15.129353   53204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
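The step above keeps any loopback config but renames bridge/podman CNI configs out of the way so minikube's own CNI choice wins later. Condensed, the disable step is the same find/mv invocation as the Run line:

    # Park competing CNI configs (bridge/podman) so the runtime does not pick them up.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;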
	I0403 19:10:15.129377   53204 start.go:495] detecting cgroup driver to use...
	I0403 19:10:15.129440   53204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0403 19:10:15.145436   53204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0403 19:10:15.157775   53204 docker.go:217] disabling cri-docker service (if available) ...
	I0403 19:10:15.157816   53204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0403 19:10:15.169618   53204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0403 19:10:15.181583   53204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0403 19:10:15.292709   53204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0403 19:10:15.440872   53204 docker.go:233] disabling docker service ...
	I0403 19:10:15.440946   53204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0403 19:10:15.454528   53204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0403 19:10:15.466493   53204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0403 19:10:15.580529   53204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0403 19:10:15.696310   53204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
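With containerd already inactive, the cri-docker and docker units are stopped, disabled and masked so CRI-O is the only runtime left for kubelet to reach. Roughly (failures for absent units are tolerated, as in the log):

    # Make CRI-O the only live runtime by parking the Docker-side units.
    sudo systemctl stop -f cri-docker.socket cri-docker.service 2>/dev/null || true
    sudo systemctl disable cri-docker.socket 2>/dev/null || true
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service || true
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is not active"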
	I0403 19:10:15.709970   53204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0403 19:10:15.726726   53204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0403 19:10:15.726776   53204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:10:15.735692   53204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0403 19:10:15.735739   53204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:10:15.744951   53204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:10:15.753905   53204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:10:15.763357   53204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0403 19:10:15.773106   53204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:10:15.782408   53204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:10:15.797400   53204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
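Taken together, the crictl/CRI-O configuration above is a handful of file edits: point crictl at the CRI-O socket, pin the pause image, and force the cgroupfs driver with conmon in the pod cgroup. Condensed from the Run lines:

    # CRI-O runtime configuration, same edits as above.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"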
	I0403 19:10:15.806862   53204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0403 19:10:15.814999   53204 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0403 19:10:15.815046   53204 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0403 19:10:15.826421   53204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0403 19:10:15.835389   53204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:10:15.948880   53204 ssh_runner.go:195] Run: sudo systemctl restart crio
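The sysctl probe exiting with status 255 is tolerated because br_netfilter simply had not been loaded yet; the recovery is the modprobe plus IP forwarding, then a CRI-O restart to pick up the new config:

    # Kernel prerequisites for bridged pod networking, then reload CRI-O.
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio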
	I0403 19:10:16.032559   53204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0403 19:10:16.032635   53204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0403 19:10:16.036718   53204 start.go:563] Will wait 60s for crictl version
	I0403 19:10:16.036778   53204 ssh_runner.go:195] Run: which crictl
	I0403 19:10:16.040100   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0403 19:10:16.078893   53204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0403 19:10:16.078967   53204 ssh_runner.go:195] Run: crio --version
	I0403 19:10:16.104924   53204 ssh_runner.go:195] Run: crio --version
	I0403 19:10:16.131438   53204 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0403 19:10:16.132682   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetIP
	I0403 19:10:16.134954   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:16.135335   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:16.135359   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:16.135564   53204 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0403 19:10:16.139214   53204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
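The /etc/hosts rewrite keeps every line except a stale host.minikube.internal entry and appends the gateway IP (192.168.39.1 for this libvirt network). Spelled out:

    # Refresh the host.minikube.internal entry only if it is missing or stale.
    if ! grep -q $'192.168.39.1\thost.minikube.internal' /etc/hosts; then
      { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts
    fi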
	I0403 19:10:16.150243   53204 kubeadm.go:883] updating cluster {Name:test-preload-159739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-prelo
ad-159739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0403 19:10:16.150337   53204 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0403 19:10:16.150391   53204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:10:16.182721   53204 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0403 19:10:16.182785   53204 ssh_runner.go:195] Run: which lz4
	I0403 19:10:16.186213   53204 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0403 19:10:16.189811   53204 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0403 19:10:16.189839   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0403 19:10:17.497700   53204 crio.go:462] duration metric: took 1.311509257s to copy over tarball
	I0403 19:10:17.497755   53204 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0403 19:10:19.793204   53204 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.295423623s)
	I0403 19:10:19.793231   53204 crio.go:469] duration metric: took 2.295509247s to extract the tarball
	I0403 19:10:19.793238   53204 ssh_runner.go:146] rm: /preloaded.tar.lz4
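Because crictl reported no preloaded kube-apiserver image, the ~459 MB preload tarball is copied in, unpacked into /var (which holds both the CRI-O image store and kubelet state), and deleted. The on-VM half of that step is:

    # Unpack the preload tarball that was just copied to /preloaded.tar.lz4, then clean up.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json    # re-check what the runtime can now see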
	I0403 19:10:19.833078   53204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:10:19.873458   53204 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0403 19:10:19.873488   53204 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0403 19:10:19.873578   53204 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:10:19.873589   53204 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0403 19:10:19.873601   53204 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0403 19:10:19.873614   53204 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0403 19:10:19.873604   53204 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0403 19:10:19.873650   53204 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0403 19:10:19.873630   53204 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0403 19:10:19.873649   53204 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0403 19:10:19.875055   53204 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:10:19.875091   53204 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0403 19:10:19.875100   53204 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0403 19:10:19.875055   53204 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0403 19:10:19.875110   53204 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0403 19:10:19.875060   53204 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0403 19:10:19.875134   53204 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0403 19:10:19.875138   53204 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0403 19:10:20.062189   53204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0403 19:10:20.089554   53204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0403 19:10:20.092180   53204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0403 19:10:20.097483   53204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0403 19:10:20.097676   53204 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0403 19:10:20.097724   53204 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0403 19:10:20.097775   53204 ssh_runner.go:195] Run: which crictl
	I0403 19:10:20.103549   53204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0403 19:10:20.127731   53204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0403 19:10:20.132019   53204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0403 19:10:20.198544   53204 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0403 19:10:20.198586   53204 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0403 19:10:20.198622   53204 ssh_runner.go:195] Run: which crictl
	I0403 19:10:20.198651   53204 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0403 19:10:20.198689   53204 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0403 19:10:20.198736   53204 ssh_runner.go:195] Run: which crictl
	I0403 19:10:20.222683   53204 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0403 19:10:20.222732   53204 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0403 19:10:20.222755   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0403 19:10:20.222771   53204 ssh_runner.go:195] Run: which crictl
	I0403 19:10:20.238500   53204 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0403 19:10:20.238543   53204 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0403 19:10:20.238590   53204 ssh_runner.go:195] Run: which crictl
	I0403 19:10:20.239549   53204 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0403 19:10:20.239584   53204 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0403 19:10:20.239621   53204 ssh_runner.go:195] Run: which crictl
	I0403 19:10:20.248358   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0403 19:10:20.248418   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0403 19:10:20.248424   53204 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0403 19:10:20.248456   53204 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0403 19:10:20.248495   53204 ssh_runner.go:195] Run: which crictl
	I0403 19:10:20.277421   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0403 19:10:20.277557   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0403 19:10:20.277677   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0403 19:10:20.277707   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0403 19:10:20.318596   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0403 19:10:20.332414   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0403 19:10:20.332531   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0403 19:10:20.430603   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0403 19:10:20.430684   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0403 19:10:20.430716   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0403 19:10:20.430730   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0403 19:10:20.430766   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0403 19:10:20.485194   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0403 19:10:20.485287   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0403 19:10:20.539206   53204 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0403 19:10:20.539323   53204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0403 19:10:20.539345   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0403 19:10:20.595966   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0403 19:10:20.596009   53204 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0403 19:10:20.596042   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0403 19:10:20.596099   53204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0403 19:10:20.636972   53204 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0403 19:10:20.637061   53204 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0403 19:10:20.637071   53204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0403 19:10:20.637072   53204 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0403 19:10:20.637118   53204 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0403 19:10:20.637133   53204 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0403 19:10:20.637165   53204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0403 19:10:20.637189   53204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0403 19:10:20.690320   53204 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0403 19:10:20.690429   53204 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0403 19:10:20.690540   53204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0403 19:10:20.690554   53204 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0403 19:10:20.690642   53204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0403 19:10:20.708230   53204 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0403 19:10:20.708413   53204 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0403 19:10:20.708525   53204 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0403 19:10:21.315655   53204 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:10:24.049200   53204 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.411991776s)
	I0403 19:10:24.049240   53204 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (3.412033161s)
	I0403 19:10:24.049242   53204 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0403 19:10:24.049261   53204 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0403 19:10:24.049289   53204 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0403 19:10:24.049303   53204 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.358738645s)
	I0403 19:10:24.049320   53204 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (3.358663124s)
	I0403 19:10:24.049333   53204 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0403 19:10:24.049339   53204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0403 19:10:24.049333   53204 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0403 19:10:24.049377   53204 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (3.340836766s)
	I0403 19:10:24.049392   53204 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0403 19:10:24.049438   53204 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.73375123s)
	I0403 19:10:24.197553   53204 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0403 19:10:24.197596   53204 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0403 19:10:24.197657   53204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0403 19:10:24.637352   53204 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0403 19:10:24.637399   53204 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0403 19:10:24.637444   53204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0403 19:10:25.275671   53204 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0403 19:10:25.275707   53204 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0403 19:10:25.275764   53204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0403 19:10:26.017420   53204 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0403 19:10:26.017469   53204 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0403 19:10:26.017515   53204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0403 19:10:26.453015   53204 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0403 19:10:26.453054   53204 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0403 19:10:26.453092   53204 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0403 19:10:28.400608   53204 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (1.947492594s)
	I0403 19:10:28.400645   53204 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0403 19:10:28.400677   53204 cache_images.go:123] Successfully loaded all cached images
	I0403 19:10:28.400683   53204 cache_images.go:92] duration metric: took 8.527182556s to LoadCachedImages
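crictl still could not find the expected v1.24.4 images after extracting the tarball, so each image fell back to the cache path: inspect for the expected ID, remove any stale tag, copy the cached archive under /var/lib/minikube/images, and podman-load it. For a single image the cycle, following the pattern in the lines above, looks like:

    # Cache fallback for one image (kube-proxy).
    IMG=registry.k8s.io/kube-proxy:v1.24.4
    sudo podman image inspect --format '{{.Id}}' "$IMG" \
      || { sudo /usr/bin/crictl rmi "$IMG"; \
           sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4; }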
	I0403 19:10:28.400694   53204 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.24.4 crio true true} ...
	I0403 19:10:28.400810   53204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-159739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-159739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0403 19:10:28.400902   53204 ssh_runner.go:195] Run: crio config
	I0403 19:10:28.448507   53204 cni.go:84] Creating CNI manager for ""
	I0403 19:10:28.448535   53204 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:10:28.448545   53204 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0403 19:10:28.448565   53204 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-159739 NodeName:test-preload-159739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0403 19:10:28.448692   53204 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-159739"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0403 19:10:28.448765   53204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0403 19:10:28.458264   53204 binaries.go:44] Found k8s binaries, skipping transfer
	I0403 19:10:28.458320   53204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0403 19:10:28.467184   53204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0403 19:10:28.482248   53204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0403 19:10:28.497106   53204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0403 19:10:28.512219   53204 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I0403 19:10:28.515600   53204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 19:10:28.526687   53204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:10:28.639980   53204 ssh_runner.go:195] Run: sudo systemctl start kubelet
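With the k8s binaries already present, only the kubelet drop-in, unit file and rendered kubeadm.yaml are written out before kubelet is started:

    # Install the rendered units/config and start kubelet.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    # 10-kubeadm.conf, kubelet.service and kubeadm.yaml.new are copied over SSH at this point
    sudo systemctl daemon-reload
    sudo systemctl start kubelet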
	I0403 19:10:28.656327   53204 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739 for IP: 192.168.39.100
	I0403 19:10:28.656352   53204 certs.go:194] generating shared ca certs ...
	I0403 19:10:28.656382   53204 certs.go:226] acquiring lock for ca certs: {Name:mk34f09724355b828d31eb1ee97771fd3b4645c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:10:28.656563   53204 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key
	I0403 19:10:28.656614   53204 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key
	I0403 19:10:28.656627   53204 certs.go:256] generating profile certs ...
	I0403 19:10:28.656743   53204 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/client.key
	I0403 19:10:28.656835   53204 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/apiserver.key.27c66f69
	I0403 19:10:28.656885   53204 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/proxy-client.key
	I0403 19:10:28.657062   53204 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem (1338 bytes)
	W0403 19:10:28.657103   53204 certs.go:480] ignoring /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552_empty.pem, impossibly tiny 0 bytes
	I0403 19:10:28.657115   53204 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem (1675 bytes)
	I0403 19:10:28.657149   53204 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem (1082 bytes)
	I0403 19:10:28.657179   53204 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem (1123 bytes)
	I0403 19:10:28.657212   53204 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem (1675 bytes)
	I0403 19:10:28.657269   53204 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:10:28.658087   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0403 19:10:28.706259   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0403 19:10:28.749702   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0403 19:10:28.779139   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0403 19:10:28.807325   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0403 19:10:28.840053   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0403 19:10:28.871098   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0403 19:10:28.893994   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0403 19:10:28.915279   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0403 19:10:28.935664   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem --> /usr/share/ca-certificates/21552.pem (1338 bytes)
	I0403 19:10:28.956430   53204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /usr/share/ca-certificates/215522.pem (1708 bytes)
	I0403 19:10:28.976795   53204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0403 19:10:28.991642   53204 ssh_runner.go:195] Run: openssl version
	I0403 19:10:28.996780   53204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0403 19:10:29.006475   53204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:10:29.010378   53204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  3 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:10:29.010422   53204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:10:29.016017   53204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0403 19:10:29.025483   53204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21552.pem && ln -fs /usr/share/ca-certificates/21552.pem /etc/ssl/certs/21552.pem"
	I0403 19:10:29.034958   53204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21552.pem
	I0403 19:10:29.038671   53204 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  3 18:20 /usr/share/ca-certificates/21552.pem
	I0403 19:10:29.038720   53204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21552.pem
	I0403 19:10:29.043828   53204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21552.pem /etc/ssl/certs/51391683.0"
	I0403 19:10:29.053530   53204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215522.pem && ln -fs /usr/share/ca-certificates/215522.pem /etc/ssl/certs/215522.pem"
	I0403 19:10:29.063139   53204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215522.pem
	I0403 19:10:29.066998   53204 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  3 18:20 /usr/share/ca-certificates/215522.pem
	I0403 19:10:29.067027   53204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215522.pem
	I0403 19:10:29.071994   53204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215522.pem /etc/ssl/certs/3ec20f2e.0"
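The hashed symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject-hash form of the three installed certificates, which is how trust-store lookups find them. A sketch of the same pattern (the log links via /etc/ssl/certs copies; this links the /usr/share paths directly):

    # Link each installed certificate under its OpenSSL subject-hash name.
    for pem in minikubeCA.pem 21552.pem 215522.pem; do
      h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$pem)
      sudo ln -fs /usr/share/ca-certificates/$pem /etc/ssl/certs/$h.0
    done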
	I0403 19:10:29.081751   53204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0403 19:10:29.085847   53204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0403 19:10:29.091342   53204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0403 19:10:29.096671   53204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0403 19:10:29.102436   53204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0403 19:10:29.108054   53204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0403 19:10:29.113904   53204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
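The -checkend 86400 probes are 24-hour expiry checks: openssl exits non-zero if the certificate would expire within the given number of seconds. Looped over the same certs:

    # Fail fast if any control-plane cert expires within the next 24 h (86400 s).
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/peer etcd/healthcheck-client; do
      openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
        || echo "$c.crt expires within 24h"
    done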
	I0403 19:10:29.119273   53204 kubeadm.go:392] StartCluster: {Name:test-preload-159739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-
159739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:10:29.119362   53204 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0403 19:10:29.119419   53204 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 19:10:29.154204   53204 cri.go:89] found id: ""
	I0403 19:10:29.154265   53204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0403 19:10:29.163921   53204 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0403 19:10:29.163942   53204 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0403 19:10:29.163988   53204 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0403 19:10:29.172946   53204 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0403 19:10:29.173359   53204 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-159739" does not appear in /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:10:29.173514   53204 kubeconfig.go:62] /home/jenkins/minikube-integration/20591-14371/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-159739" cluster setting kubeconfig missing "test-preload-159739" context setting]
	I0403 19:10:29.173773   53204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:10:29.174252   53204 kapi.go:59] client config for test-preload-159739: &rest.Config{Host:"https://192.168.39.100:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/client.crt", KeyFile:"/home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/client.key", CAFile:"/home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0403 19:10:29.174644   53204 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0403 19:10:29.174657   53204 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0403 19:10:29.174669   53204 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0403 19:10:29.174675   53204 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0403 19:10:29.174997   53204 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0403 19:10:29.183405   53204 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.100
	I0403 19:10:29.183436   53204 kubeadm.go:1160] stopping kube-system containers ...
	I0403 19:10:29.183447   53204 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0403 19:10:29.183492   53204 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 19:10:29.224527   53204 cri.go:89] found id: ""
	I0403 19:10:29.224602   53204 ssh_runner.go:195] Run: sudo systemctl stop kubelet
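Before replaying the init phases, any running kube-system containers would be stopped; here the crictl query returns nothing, so only kubelet itself is stopped:

    # Stop workloads from the previous control plane: list kube-system containers, then stop kubelet.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo systemctl stop kubelet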
	I0403 19:10:29.241006   53204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:10:29.250149   53204 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:10:29.250179   53204 kubeadm.go:157] found existing configuration files:
	
	I0403 19:10:29.250214   53204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:10:29.258807   53204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:10:29.258867   53204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:10:29.267733   53204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:10:29.276057   53204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:10:29.276096   53204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:10:29.284524   53204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:10:29.292658   53204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:10:29.292711   53204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:10:29.301058   53204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:10:29.309130   53204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:10:29.309166   53204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
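The four grep/rm pairs above are a stale-kubeconfig sweep: any of the admin, kubelet, controller-manager or scheduler kubeconfigs that does not point at control-plane.minikube.internal:8443 is removed so the next phase regenerates it (here none existed to begin with):

    # Remove kubeconfigs that do not reference the expected control-plane endpoint.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done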
	I0403 19:10:29.317754   53204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0403 19:10:29.326495   53204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:10:29.416835   53204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:10:30.083197   53204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:10:30.340257   53204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:10:30.410250   53204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
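The restart path replays individual kubeadm init phases against the freshly copied kubeadm.yaml rather than running a full init. In order (kphase is just a local helper for brevity; the commands match the Run lines above):

    # Phase-by-phase control-plane bring-up.
    kphase() { sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" \
               kubeadm init phase "$@" --config /var/tmp/minikube/kubeadm.yaml; }
    kphase certs all
    kphase kubeconfig all
    kphase kubelet-start
    kphase control-plane all
    kphase etcd local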
	I0403 19:10:30.471593   53204 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:10:30.471656   53204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:10:30.971774   53204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:10:31.472494   53204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:10:31.494786   53204 api_server.go:72] duration metric: took 1.023187923s to wait for apiserver process to appear ...
	I0403 19:10:31.494813   53204 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:10:31.494849   53204 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0403 19:10:31.495384   53204 api_server.go:269] stopped: https://192.168.39.100:8443/healthz: Get "https://192.168.39.100:8443/healthz": dial tcp 192.168.39.100:8443: connect: connection refused
	I0403 19:10:31.994968   53204 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0403 19:10:35.350656   53204 api_server.go:279] https://192.168.39.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0403 19:10:35.350686   53204 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0403 19:10:35.350709   53204 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0403 19:10:35.395626   53204 api_server.go:279] https://192.168.39.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0403 19:10:35.395667   53204 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0403 19:10:35.494885   53204 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0403 19:10:35.507574   53204 api_server.go:279] https://192.168.39.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0403 19:10:35.507614   53204 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0403 19:10:35.994896   53204 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0403 19:10:36.001653   53204 api_server.go:279] https://192.168.39.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0403 19:10:36.001696   53204 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0403 19:10:36.495375   53204 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0403 19:10:36.501036   53204 api_server.go:279] https://192.168.39.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0403 19:10:36.501068   53204 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0403 19:10:36.994926   53204 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0403 19:10:37.004092   53204 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0403 19:10:37.010048   53204 api_server.go:141] control plane version: v1.24.4
	I0403 19:10:37.010071   53204 api_server.go:131] duration metric: took 5.515252961s to wait for apiserver health ...
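For reference, the api_server.go entries above amount to polling GET https://192.168.39.100:8443/healthz until it returns 200 "ok", tolerating the 403s (anonymous access before RBAC bootstrap) and 500s (failed poststart hooks) seen along the way. A minimal sketch of that loop, assuming an insecure test client and a 500ms retry interval (both assumptions, not minikube's exact code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.100:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver is healthy
				}
			}
			// 403/500 while RBAC bootstrap hooks finish; back off briefly and retry.
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}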
	I0403 19:10:37.010079   53204 cni.go:84] Creating CNI manager for ""
	I0403 19:10:37.010085   53204 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:10:37.011778   53204 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0403 19:10:37.013033   53204 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0403 19:10:37.023102   53204 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
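The scp above writes a 496-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist. The exact payload is not shown in the log; a representative bridge conflist of that general shape, written the same way, might look like this (the field values and pod subnet are illustrative assumptions):

	package main

	import "os"

	// Illustrative bridge CNI configuration; not the exact bytes minikube generates.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    }
	  ]
	}`

	func main() {
		// Write the conflist where CRI-O's CNI plugin discovery expects it.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}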
	I0403 19:10:37.039801   53204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:10:37.044124   53204 system_pods.go:59] 7 kube-system pods found
	I0403 19:10:37.044190   53204 system_pods.go:61] "coredns-6d4b75cb6d-rmrqx" [e4a5aca5-34eb-47d9-b741-87120e3b7cdc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0403 19:10:37.044206   53204 system_pods.go:61] "etcd-test-preload-159739" [1d90d997-559b-4b70-a593-52cad0bdab96] Running
	I0403 19:10:37.044214   53204 system_pods.go:61] "kube-apiserver-test-preload-159739" [15924497-297a-4e27-81aa-ecfa466c3319] Running
	I0403 19:10:37.044230   53204 system_pods.go:61] "kube-controller-manager-test-preload-159739" [e7995162-ce2e-41c5-b010-8042757b15e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0403 19:10:37.044241   53204 system_pods.go:61] "kube-proxy-m8jxc" [895a2584-30e0-498f-8e86-28309568569e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0403 19:10:37.044250   53204 system_pods.go:61] "kube-scheduler-test-preload-159739" [32fc2343-412c-4a70-b293-9eddd7f61007] Running
	I0403 19:10:37.044260   53204 system_pods.go:61] "storage-provisioner" [83a99ecf-5581-479d-acdf-f495a52e0b43] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0403 19:10:37.044271   53204 system_pods.go:74] duration metric: took 4.450766ms to wait for pod list to return data ...
	I0403 19:10:37.044284   53204 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:10:37.046391   53204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:10:37.046411   53204 node_conditions.go:123] node cpu capacity is 2
	I0403 19:10:37.046421   53204 node_conditions.go:105] duration metric: took 2.132034ms to run NodePressure ...
	I0403 19:10:37.046439   53204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:10:37.235187   53204 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0403 19:10:37.238039   53204 kubeadm.go:739] kubelet initialised
	I0403 19:10:37.238063   53204 kubeadm.go:740] duration metric: took 2.845111ms waiting for restarted kubelet to initialise ...
	I0403 19:10:37.238073   53204 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:10:37.240711   53204 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-rmrqx" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:37.243892   53204 pod_ready.go:98] node "test-preload-159739" hosting pod "coredns-6d4b75cb6d-rmrqx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:37.243916   53204 pod_ready.go:82] duration metric: took 3.18308ms for pod "coredns-6d4b75cb6d-rmrqx" in "kube-system" namespace to be "Ready" ...
	E0403 19:10:37.243927   53204 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-159739" hosting pod "coredns-6d4b75cb6d-rmrqx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:37.243936   53204 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:37.247003   53204 pod_ready.go:98] node "test-preload-159739" hosting pod "etcd-test-preload-159739" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:37.247037   53204 pod_ready.go:82] duration metric: took 3.075939ms for pod "etcd-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	E0403 19:10:37.247051   53204 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-159739" hosting pod "etcd-test-preload-159739" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:37.247069   53204 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:37.251628   53204 pod_ready.go:98] node "test-preload-159739" hosting pod "kube-apiserver-test-preload-159739" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:37.251649   53204 pod_ready.go:82] duration metric: took 4.567008ms for pod "kube-apiserver-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	E0403 19:10:37.251659   53204 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-159739" hosting pod "kube-apiserver-test-preload-159739" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:37.251668   53204 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:37.443989   53204 pod_ready.go:98] node "test-preload-159739" hosting pod "kube-controller-manager-test-preload-159739" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:37.444027   53204 pod_ready.go:82] duration metric: took 192.344717ms for pod "kube-controller-manager-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	E0403 19:10:37.444040   53204 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-159739" hosting pod "kube-controller-manager-test-preload-159739" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:37.444049   53204 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-m8jxc" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:37.843389   53204 pod_ready.go:98] node "test-preload-159739" hosting pod "kube-proxy-m8jxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:37.843415   53204 pod_ready.go:82] duration metric: took 399.356576ms for pod "kube-proxy-m8jxc" in "kube-system" namespace to be "Ready" ...
	E0403 19:10:37.843424   53204 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-159739" hosting pod "kube-proxy-m8jxc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:37.843441   53204 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:38.243724   53204 pod_ready.go:98] node "test-preload-159739" hosting pod "kube-scheduler-test-preload-159739" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:38.243751   53204 pod_ready.go:82] duration metric: took 400.303536ms for pod "kube-scheduler-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	E0403 19:10:38.243759   53204 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-159739" hosting pod "kube-scheduler-test-preload-159739" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:38.243767   53204 pod_ready.go:39] duration metric: took 1.00567747s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:10:38.243785   53204 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0403 19:10:38.257614   53204 ops.go:34] apiserver oom_adj: -16
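The oom_adj probe above is simply a pgrep for the kube-apiserver process followed by a read of /proc/<pid>/oom_adj (reported as -16, i.e. the kernel is unlikely to OOM-kill it). A small equivalent sketch (the single-match assumption mirrors the shell one-liner in the log; everything else is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Locate the kube-apiserver process, as the shell one-liner does.
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.Fields(string(out))[0] // assume a single match, like the log's $(pgrep ...)

		// Read its OOM score adjustment.
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
	}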
	I0403 19:10:38.257636   53204 kubeadm.go:597] duration metric: took 9.093686815s to restartPrimaryControlPlane
	I0403 19:10:38.257646   53204 kubeadm.go:394] duration metric: took 9.138380603s to StartCluster
	I0403 19:10:38.257666   53204 settings.go:142] acquiring lock: {Name:mk92384ef10350a2ea8b1710e0b74c72a0214398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:10:38.257738   53204 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:10:38.258319   53204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:10:38.258554   53204 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:10:38.258656   53204 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0403 19:10:38.258725   53204 config.go:182] Loaded profile config "test-preload-159739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0403 19:10:38.258731   53204 addons.go:69] Setting storage-provisioner=true in profile "test-preload-159739"
	I0403 19:10:38.258755   53204 addons.go:238] Setting addon storage-provisioner=true in "test-preload-159739"
	W0403 19:10:38.258769   53204 addons.go:247] addon storage-provisioner should already be in state true
	I0403 19:10:38.258800   53204 host.go:66] Checking if "test-preload-159739" exists ...
	I0403 19:10:38.258754   53204 addons.go:69] Setting default-storageclass=true in profile "test-preload-159739"
	I0403 19:10:38.258875   53204 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-159739"
	I0403 19:10:38.259278   53204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:10:38.259278   53204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:10:38.259320   53204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:10:38.259327   53204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:10:38.260102   53204 out.go:177] * Verifying Kubernetes components...
	I0403 19:10:38.261210   53204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:10:38.274877   53204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0403 19:10:38.275469   53204 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:10:38.276033   53204 main.go:141] libmachine: Using API Version  1
	I0403 19:10:38.276059   53204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:10:38.276448   53204 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:10:38.277022   53204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:10:38.277068   53204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:10:38.277862   53204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0403 19:10:38.278234   53204 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:10:38.278594   53204 main.go:141] libmachine: Using API Version  1
	I0403 19:10:38.278613   53204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:10:38.278945   53204 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:10:38.279134   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetState
	I0403 19:10:38.281301   53204 kapi.go:59] client config for test-preload-159739: &rest.Config{Host:"https://192.168.39.100:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/client.crt", KeyFile:"/home/jenkins/minikube-integration/20591-14371/.minikube/profiles/test-preload-159739/client.key", CAFile:"/home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0403 19:10:38.281644   53204 addons.go:238] Setting addon default-storageclass=true in "test-preload-159739"
	W0403 19:10:38.281663   53204 addons.go:247] addon default-storageclass should already be in state true
	I0403 19:10:38.281690   53204 host.go:66] Checking if "test-preload-159739" exists ...
	I0403 19:10:38.282047   53204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:10:38.282090   53204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:10:38.292154   53204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0403 19:10:38.292634   53204 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:10:38.293162   53204 main.go:141] libmachine: Using API Version  1
	I0403 19:10:38.293195   53204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:10:38.293586   53204 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:10:38.293761   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetState
	I0403 19:10:38.295177   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:10:38.297077   53204 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:10:38.297250   53204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I0403 19:10:38.297605   53204 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:10:38.297997   53204 main.go:141] libmachine: Using API Version  1
	I0403 19:10:38.298023   53204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:10:38.298233   53204 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:10:38.298249   53204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0403 19:10:38.298262   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:38.298366   53204 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:10:38.298958   53204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:10:38.299007   53204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:10:38.300610   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:38.300936   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:38.300950   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:38.301163   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:38.301316   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:38.301429   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:38.301535   53204 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/test-preload-159739/id_rsa Username:docker}
	I0403 19:10:38.346782   53204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39021
	I0403 19:10:38.347280   53204 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:10:38.347893   53204 main.go:141] libmachine: Using API Version  1
	I0403 19:10:38.347913   53204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:10:38.348403   53204 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:10:38.348587   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetState
	I0403 19:10:38.350144   53204 main.go:141] libmachine: (test-preload-159739) Calling .DriverName
	I0403 19:10:38.350335   53204 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0403 19:10:38.350350   53204 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0403 19:10:38.350378   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHHostname
	I0403 19:10:38.353158   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:38.353561   53204 main.go:141] libmachine: (test-preload-159739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2e:ad", ip: ""} in network mk-test-preload-159739: {Iface:virbr1 ExpiryTime:2025-04-03 20:10:07 +0000 UTC Type:0 Mac:52:54:00:31:2e:ad Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-159739 Clientid:01:52:54:00:31:2e:ad}
	I0403 19:10:38.353586   53204 main.go:141] libmachine: (test-preload-159739) DBG | domain test-preload-159739 has defined IP address 192.168.39.100 and MAC address 52:54:00:31:2e:ad in network mk-test-preload-159739
	I0403 19:10:38.353754   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHPort
	I0403 19:10:38.353910   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHKeyPath
	I0403 19:10:38.354066   53204 main.go:141] libmachine: (test-preload-159739) Calling .GetSSHUsername
	I0403 19:10:38.354181   53204 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/test-preload-159739/id_rsa Username:docker}
	I0403 19:10:38.461416   53204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:10:38.481748   53204 node_ready.go:35] waiting up to 6m0s for node "test-preload-159739" to be "Ready" ...
	I0403 19:10:38.533209   53204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0403 19:10:38.612691   53204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:10:39.477496   53204 main.go:141] libmachine: Making call to close driver server
	I0403 19:10:39.477523   53204 main.go:141] libmachine: (test-preload-159739) Calling .Close
	I0403 19:10:39.477579   53204 main.go:141] libmachine: Making call to close driver server
	I0403 19:10:39.477599   53204 main.go:141] libmachine: (test-preload-159739) Calling .Close
	I0403 19:10:39.477812   53204 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:10:39.477827   53204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:10:39.477836   53204 main.go:141] libmachine: Making call to close driver server
	I0403 19:10:39.477840   53204 main.go:141] libmachine: (test-preload-159739) DBG | Closing plugin on server side
	I0403 19:10:39.477843   53204 main.go:141] libmachine: (test-preload-159739) Calling .Close
	I0403 19:10:39.477862   53204 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:10:39.477869   53204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:10:39.477876   53204 main.go:141] libmachine: Making call to close driver server
	I0403 19:10:39.477882   53204 main.go:141] libmachine: (test-preload-159739) Calling .Close
	I0403 19:10:39.478045   53204 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:10:39.478061   53204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:10:39.478119   53204 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:10:39.478132   53204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:10:39.478165   53204 main.go:141] libmachine: (test-preload-159739) DBG | Closing plugin on server side
	I0403 19:10:39.482989   53204 main.go:141] libmachine: Making call to close driver server
	I0403 19:10:39.483005   53204 main.go:141] libmachine: (test-preload-159739) Calling .Close
	I0403 19:10:39.483229   53204 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:10:39.483248   53204 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:10:39.483248   53204 main.go:141] libmachine: (test-preload-159739) DBG | Closing plugin on server side
	I0403 19:10:39.484797   53204 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0403 19:10:39.486073   53204 addons.go:514] duration metric: took 1.227422463s for enable addons: enabled=[storage-provisioner default-storageclass]
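The two addon manifests are applied with the node's bundled kubectl (v1.24.4) against /var/lib/minikube/kubeconfig, as the ssh_runner lines above show. In the real flow this runs on the VM over SSH with sudo; as a standalone sketch with those pieces omitted (paths taken from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Apply the storage-provisioner addon with the version-matched kubectl on the node.
		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.4/kubectl",
			"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}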
	I0403 19:10:40.485472   53204 node_ready.go:53] node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:42.985787   53204 node_ready.go:53] node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:45.485809   53204 node_ready.go:53] node "test-preload-159739" has status "Ready":"False"
	I0403 19:10:45.985303   53204 node_ready.go:49] node "test-preload-159739" has status "Ready":"True"
	I0403 19:10:45.985324   53204 node_ready.go:38] duration metric: took 7.503545163s for node "test-preload-159739" to be "Ready" ...
	I0403 19:10:45.985332   53204 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:10:45.988831   53204 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-rmrqx" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:45.992223   53204 pod_ready.go:93] pod "coredns-6d4b75cb6d-rmrqx" in "kube-system" namespace has status "Ready":"True"
	I0403 19:10:45.992248   53204 pod_ready.go:82] duration metric: took 3.392364ms for pod "coredns-6d4b75cb6d-rmrqx" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:45.992259   53204 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:47.998951   53204 pod_ready.go:103] pod "etcd-test-preload-159739" in "kube-system" namespace has status "Ready":"False"
	I0403 19:10:49.999281   53204 pod_ready.go:103] pod "etcd-test-preload-159739" in "kube-system" namespace has status "Ready":"False"
	I0403 19:10:50.997467   53204 pod_ready.go:93] pod "etcd-test-preload-159739" in "kube-system" namespace has status "Ready":"True"
	I0403 19:10:50.997491   53204 pod_ready.go:82] duration metric: took 5.005226014s for pod "etcd-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:50.997500   53204 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:51.001887   53204 pod_ready.go:93] pod "kube-apiserver-test-preload-159739" in "kube-system" namespace has status "Ready":"True"
	I0403 19:10:51.001907   53204 pod_ready.go:82] duration metric: took 4.401124ms for pod "kube-apiserver-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:51.001914   53204 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:51.005317   53204 pod_ready.go:93] pod "kube-controller-manager-test-preload-159739" in "kube-system" namespace has status "Ready":"True"
	I0403 19:10:51.005331   53204 pod_ready.go:82] duration metric: took 3.411136ms for pod "kube-controller-manager-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:51.005340   53204 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m8jxc" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:51.008358   53204 pod_ready.go:93] pod "kube-proxy-m8jxc" in "kube-system" namespace has status "Ready":"True"
	I0403 19:10:51.008372   53204 pod_ready.go:82] duration metric: took 3.026759ms for pod "kube-proxy-m8jxc" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:51.008379   53204 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:51.011468   53204 pod_ready.go:93] pod "kube-scheduler-test-preload-159739" in "kube-system" namespace has status "Ready":"True"
	I0403 19:10:51.011482   53204 pod_ready.go:82] duration metric: took 3.098312ms for pod "kube-scheduler-test-preload-159739" in "kube-system" namespace to be "Ready" ...
	I0403 19:10:51.011490   53204 pod_ready.go:39] duration metric: took 5.026148139s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
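The pod_ready.go entries boil down to polling each system-critical pod until its PodReady condition is True (and, in the earlier pass, skipping while the node itself is NotReady). A client-go sketch of that wait, assuming a kubeconfig at the default location and a 500ms poll interval (both assumptions, not minikube's exact code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its PodReady condition reports True.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(cs, "kube-system", "coredns-6d4b75cb6d-rmrqx", 4*time.Minute); err != nil {
			fmt.Println("pod never became Ready:", err)
		}
	}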
	I0403 19:10:51.011502   53204 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:10:51.011552   53204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:10:51.026661   53204 api_server.go:72] duration metric: took 12.768076791s to wait for apiserver process to appear ...
	I0403 19:10:51.026684   53204 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:10:51.026698   53204 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0403 19:10:51.031116   53204 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0403 19:10:51.031998   53204 api_server.go:141] control plane version: v1.24.4
	I0403 19:10:51.032014   53204 api_server.go:131] duration metric: took 5.324364ms to wait for apiserver health ...
	I0403 19:10:51.032021   53204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:10:51.197790   53204 system_pods.go:59] 7 kube-system pods found
	I0403 19:10:51.197817   53204 system_pods.go:61] "coredns-6d4b75cb6d-rmrqx" [e4a5aca5-34eb-47d9-b741-87120e3b7cdc] Running
	I0403 19:10:51.197822   53204 system_pods.go:61] "etcd-test-preload-159739" [1d90d997-559b-4b70-a593-52cad0bdab96] Running
	I0403 19:10:51.197826   53204 system_pods.go:61] "kube-apiserver-test-preload-159739" [15924497-297a-4e27-81aa-ecfa466c3319] Running
	I0403 19:10:51.197830   53204 system_pods.go:61] "kube-controller-manager-test-preload-159739" [e7995162-ce2e-41c5-b010-8042757b15e2] Running
	I0403 19:10:51.197834   53204 system_pods.go:61] "kube-proxy-m8jxc" [895a2584-30e0-498f-8e86-28309568569e] Running
	I0403 19:10:51.197837   53204 system_pods.go:61] "kube-scheduler-test-preload-159739" [32fc2343-412c-4a70-b293-9eddd7f61007] Running
	I0403 19:10:51.197840   53204 system_pods.go:61] "storage-provisioner" [83a99ecf-5581-479d-acdf-f495a52e0b43] Running
	I0403 19:10:51.197853   53204 system_pods.go:74] duration metric: took 165.820702ms to wait for pod list to return data ...
	I0403 19:10:51.197862   53204 default_sa.go:34] waiting for default service account to be created ...
	I0403 19:10:51.395900   53204 default_sa.go:45] found service account: "default"
	I0403 19:10:51.395923   53204 default_sa.go:55] duration metric: took 198.050766ms for default service account to be created ...
	I0403 19:10:51.395932   53204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 19:10:51.597418   53204 system_pods.go:86] 7 kube-system pods found
	I0403 19:10:51.597444   53204 system_pods.go:89] "coredns-6d4b75cb6d-rmrqx" [e4a5aca5-34eb-47d9-b741-87120e3b7cdc] Running
	I0403 19:10:51.597449   53204 system_pods.go:89] "etcd-test-preload-159739" [1d90d997-559b-4b70-a593-52cad0bdab96] Running
	I0403 19:10:51.597453   53204 system_pods.go:89] "kube-apiserver-test-preload-159739" [15924497-297a-4e27-81aa-ecfa466c3319] Running
	I0403 19:10:51.597462   53204 system_pods.go:89] "kube-controller-manager-test-preload-159739" [e7995162-ce2e-41c5-b010-8042757b15e2] Running
	I0403 19:10:51.597466   53204 system_pods.go:89] "kube-proxy-m8jxc" [895a2584-30e0-498f-8e86-28309568569e] Running
	I0403 19:10:51.597471   53204 system_pods.go:89] "kube-scheduler-test-preload-159739" [32fc2343-412c-4a70-b293-9eddd7f61007] Running
	I0403 19:10:51.597476   53204 system_pods.go:89] "storage-provisioner" [83a99ecf-5581-479d-acdf-f495a52e0b43] Running
	I0403 19:10:51.597484   53204 system_pods.go:126] duration metric: took 201.546434ms to wait for k8s-apps to be running ...
	I0403 19:10:51.597492   53204 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 19:10:51.597557   53204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:10:51.612828   53204 system_svc.go:56] duration metric: took 15.325681ms WaitForService to wait for kubelet
	I0403 19:10:51.612859   53204 kubeadm.go:582] duration metric: took 13.354279812s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:10:51.612875   53204 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:10:51.797237   53204 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:10:51.797268   53204 node_conditions.go:123] node cpu capacity is 2
	I0403 19:10:51.797279   53204 node_conditions.go:105] duration metric: took 184.399991ms to run NodePressure ...
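The node_conditions.go lines read capacity straight from the Node status (17734596Ki of ephemeral storage and 2 CPUs here). A minimal client-go sketch of the same read (the kubeconfig location is an assumption):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity values are resource.Quantity; copy to locals before calling String().
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}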
	I0403 19:10:51.797290   53204 start.go:241] waiting for startup goroutines ...
	I0403 19:10:51.797296   53204 start.go:246] waiting for cluster config update ...
	I0403 19:10:51.797306   53204 start.go:255] writing updated cluster config ...
	I0403 19:10:51.797552   53204 ssh_runner.go:195] Run: rm -f paused
	I0403 19:10:51.844916   53204 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0403 19:10:51.847072   53204 out.go:201] 
	W0403 19:10:51.848287   53204 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0403 19:10:51.849477   53204 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0403 19:10:51.850619   53204 out.go:177] * Done! kubectl is now configured to use "test-preload-159739" cluster and "default" namespace by default
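The warning above flags a kubectl/cluster minor-version skew of 8 (client 1.32.3 against a 1.24.4 control plane), well outside the supported +/-1 minor skew; the log's own suggestion is to use the version-matched binary via `minikube kubectl -- get pods -A`. A trivial wrapper for that exact command:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Run the cluster-version-matched kubectl that minikube bundles.
		cmd := exec.Command("minikube", "kubectl", "--", "get", "pods", "-A")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		_ = cmd.Run()
	}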
	
	
	==> CRI-O <==
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.692117924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707452692095892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05d5f5b0-1bde-4135-a286-914d45af861a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.692691714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d98c85e0-c540-44e7-bdd2-8fc9ae29497e name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.692742900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d98c85e0-c540-44e7-bdd2-8fc9ae29497e name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.692911217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d898eda8ba0a424294b4a0639d41b38b3aba6c11f51f8fb24e0a7e8fb1904d27,PodSandboxId:8009c4076cb6d58e76ff18892bed574435181d67d9d09b9fa03f0ebbf2589be5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1743707443548169738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rmrqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4a5aca5-34eb-47d9-b741-87120e3b7cdc,},Annotations:map[string]string{io.kubernetes.container.hash: f5326af4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b432c291889602d2621be9d44b15f13d6b6f7d563a39a7a1d0dcb5df5d1e32,PodSandboxId:919bf8acba1528010f47bd4c092d07dcc299f9b3228a0678613f79678c22897b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1743707436731616428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m8jxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 895a2584-30e0-498f-8e86-28309568569e,},Annotations:map[string]string{io.kubernetes.container.hash: ee483b7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92d37d3584b72cfe3c051d235ccf3a9d3c6a511002238c8574bc145a53b9d0c1,PodSandboxId:5079226fc23852919527800c39845766190658de64759cf68ee6903784e81373,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743707436451880096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83
a99ecf-5581-479d-acdf-f495a52e0b43,},Annotations:map[string]string{io.kubernetes.container.hash: cefbe641,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dac06ee43dd413b7ed6b6caa7d845a051ef105e521f1a86eaa6ac5dcd96b7fa,PodSandboxId:2a4d05886d4c8de815a42c9326a6695ef93d102a79565c3175710ef1c051093e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1743707431173177215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6bd761fc7a2238ee2190c939f5cc08b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2eee5c73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a07db1d245c3406708ca093bb5f04db756d624b17f4e2249cb676a90cd131e4,PodSandboxId:8926b8c1a48e408d757c6eb8e5098c88c02ef6fcaafae56068d9de6eb05d0422,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1743707431126852810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f6eb32c79515b02faaa8bb7c4a5e17a,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea4b09c95187874ae33331f9018aee2ffdcd8fa85fd20918796069a1d300d67,PodSandboxId:05b0d0785dc20e405cd2f82ed4d8394215414d47401578430e19f12668b54557,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1743707431148780138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95c0f4cc638ad564a9d751e153bd224,},Annotations:map[string]str
ing{io.kubernetes.container.hash: cca4ed8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c4ad08be99306c61f183438981abda4ed4f81f5e9834d0cb6c6e575993a7ec,PodSandboxId:220d0d43198ae3cd656f090e84b63e97a1629518265f5aa49514264257de3ca9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1743707431140165529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274963dd5cb9232944b5364387b393ae,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d98c85e0-c540-44e7-bdd2-8fc9ae29497e name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.726191624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c9a16e7-8bb3-4b61-bc46-f37112b1f451 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.726275846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c9a16e7-8bb3-4b61-bc46-f37112b1f451 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.727346464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca212844-5bc5-46a0-99d5-9fd085e237a5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.727764675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707452727744985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca212844-5bc5-46a0-99d5-9fd085e237a5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.728342785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6f8c734-74e5-4c3f-8ea1-0e0607b94354 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.728406105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6f8c734-74e5-4c3f-8ea1-0e0607b94354 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.728638268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d898eda8ba0a424294b4a0639d41b38b3aba6c11f51f8fb24e0a7e8fb1904d27,PodSandboxId:8009c4076cb6d58e76ff18892bed574435181d67d9d09b9fa03f0ebbf2589be5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1743707443548169738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rmrqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4a5aca5-34eb-47d9-b741-87120e3b7cdc,},Annotations:map[string]string{io.kubernetes.container.hash: f5326af4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b432c291889602d2621be9d44b15f13d6b6f7d563a39a7a1d0dcb5df5d1e32,PodSandboxId:919bf8acba1528010f47bd4c092d07dcc299f9b3228a0678613f79678c22897b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1743707436731616428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m8jxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 895a2584-30e0-498f-8e86-28309568569e,},Annotations:map[string]string{io.kubernetes.container.hash: ee483b7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92d37d3584b72cfe3c051d235ccf3a9d3c6a511002238c8574bc145a53b9d0c1,PodSandboxId:5079226fc23852919527800c39845766190658de64759cf68ee6903784e81373,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743707436451880096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83
a99ecf-5581-479d-acdf-f495a52e0b43,},Annotations:map[string]string{io.kubernetes.container.hash: cefbe641,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dac06ee43dd413b7ed6b6caa7d845a051ef105e521f1a86eaa6ac5dcd96b7fa,PodSandboxId:2a4d05886d4c8de815a42c9326a6695ef93d102a79565c3175710ef1c051093e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1743707431173177215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6bd761fc7a2238ee2190c939f5cc08b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2eee5c73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a07db1d245c3406708ca093bb5f04db756d624b17f4e2249cb676a90cd131e4,PodSandboxId:8926b8c1a48e408d757c6eb8e5098c88c02ef6fcaafae56068d9de6eb05d0422,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1743707431126852810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f6eb32c79515b02faaa8bb7c4a5e17a,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea4b09c95187874ae33331f9018aee2ffdcd8fa85fd20918796069a1d300d67,PodSandboxId:05b0d0785dc20e405cd2f82ed4d8394215414d47401578430e19f12668b54557,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1743707431148780138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95c0f4cc638ad564a9d751e153bd224,},Annotations:map[string]str
ing{io.kubernetes.container.hash: cca4ed8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c4ad08be99306c61f183438981abda4ed4f81f5e9834d0cb6c6e575993a7ec,PodSandboxId:220d0d43198ae3cd656f090e84b63e97a1629518265f5aa49514264257de3ca9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1743707431140165529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274963dd5cb9232944b5364387b393ae,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6f8c734-74e5-4c3f-8ea1-0e0607b94354 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.762605060Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=719d044f-2b64-491f-bec3-595f56a7b46f name=/runtime.v1.RuntimeService/Version
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.762689832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=719d044f-2b64-491f-bec3-595f56a7b46f name=/runtime.v1.RuntimeService/Version
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.763879128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2f4f6df-a766-4c65-968c-d9ecdc82d2d8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.764428821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707452764405782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2f4f6df-a766-4c65-968c-d9ecdc82d2d8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.765009565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adb3c6d3-188d-46a9-8657-a21c08add303 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.765068695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adb3c6d3-188d-46a9-8657-a21c08add303 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.765222909Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d898eda8ba0a424294b4a0639d41b38b3aba6c11f51f8fb24e0a7e8fb1904d27,PodSandboxId:8009c4076cb6d58e76ff18892bed574435181d67d9d09b9fa03f0ebbf2589be5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1743707443548169738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rmrqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4a5aca5-34eb-47d9-b741-87120e3b7cdc,},Annotations:map[string]string{io.kubernetes.container.hash: f5326af4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b432c291889602d2621be9d44b15f13d6b6f7d563a39a7a1d0dcb5df5d1e32,PodSandboxId:919bf8acba1528010f47bd4c092d07dcc299f9b3228a0678613f79678c22897b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1743707436731616428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m8jxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 895a2584-30e0-498f-8e86-28309568569e,},Annotations:map[string]string{io.kubernetes.container.hash: ee483b7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92d37d3584b72cfe3c051d235ccf3a9d3c6a511002238c8574bc145a53b9d0c1,PodSandboxId:5079226fc23852919527800c39845766190658de64759cf68ee6903784e81373,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743707436451880096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83
a99ecf-5581-479d-acdf-f495a52e0b43,},Annotations:map[string]string{io.kubernetes.container.hash: cefbe641,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dac06ee43dd413b7ed6b6caa7d845a051ef105e521f1a86eaa6ac5dcd96b7fa,PodSandboxId:2a4d05886d4c8de815a42c9326a6695ef93d102a79565c3175710ef1c051093e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1743707431173177215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6bd761fc7a2238ee2190c939f5cc08b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2eee5c73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a07db1d245c3406708ca093bb5f04db756d624b17f4e2249cb676a90cd131e4,PodSandboxId:8926b8c1a48e408d757c6eb8e5098c88c02ef6fcaafae56068d9de6eb05d0422,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1743707431126852810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f6eb32c79515b02faaa8bb7c4a5e17a,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea4b09c95187874ae33331f9018aee2ffdcd8fa85fd20918796069a1d300d67,PodSandboxId:05b0d0785dc20e405cd2f82ed4d8394215414d47401578430e19f12668b54557,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1743707431148780138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95c0f4cc638ad564a9d751e153bd224,},Annotations:map[string]str
ing{io.kubernetes.container.hash: cca4ed8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c4ad08be99306c61f183438981abda4ed4f81f5e9834d0cb6c6e575993a7ec,PodSandboxId:220d0d43198ae3cd656f090e84b63e97a1629518265f5aa49514264257de3ca9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1743707431140165529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274963dd5cb9232944b5364387b393ae,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adb3c6d3-188d-46a9-8657-a21c08add303 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.794725741Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eef89c82-2b86-4b9c-8bd2-9b721cf8cea7 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.794811611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eef89c82-2b86-4b9c-8bd2-9b721cf8cea7 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.795728945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aacdd03e-3c03-4373-81b6-524cc45a7c06 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.796377842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707452796353320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aacdd03e-3c03-4373-81b6-524cc45a7c06 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.796871265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=573d7f61-4cd6-43a2-8afa-e3e1543af0aa name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.796931989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=573d7f61-4cd6-43a2-8afa-e3e1543af0aa name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:10:52 test-preload-159739 crio[662]: time="2025-04-03 19:10:52.797127534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d898eda8ba0a424294b4a0639d41b38b3aba6c11f51f8fb24e0a7e8fb1904d27,PodSandboxId:8009c4076cb6d58e76ff18892bed574435181d67d9d09b9fa03f0ebbf2589be5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1743707443548169738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rmrqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4a5aca5-34eb-47d9-b741-87120e3b7cdc,},Annotations:map[string]string{io.kubernetes.container.hash: f5326af4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b432c291889602d2621be9d44b15f13d6b6f7d563a39a7a1d0dcb5df5d1e32,PodSandboxId:919bf8acba1528010f47bd4c092d07dcc299f9b3228a0678613f79678c22897b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1743707436731616428,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m8jxc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 895a2584-30e0-498f-8e86-28309568569e,},Annotations:map[string]string{io.kubernetes.container.hash: ee483b7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92d37d3584b72cfe3c051d235ccf3a9d3c6a511002238c8574bc145a53b9d0c1,PodSandboxId:5079226fc23852919527800c39845766190658de64759cf68ee6903784e81373,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743707436451880096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83
a99ecf-5581-479d-acdf-f495a52e0b43,},Annotations:map[string]string{io.kubernetes.container.hash: cefbe641,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dac06ee43dd413b7ed6b6caa7d845a051ef105e521f1a86eaa6ac5dcd96b7fa,PodSandboxId:2a4d05886d4c8de815a42c9326a6695ef93d102a79565c3175710ef1c051093e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1743707431173177215,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6bd761fc7a2238ee2190c939f5cc08b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2eee5c73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a07db1d245c3406708ca093bb5f04db756d624b17f4e2249cb676a90cd131e4,PodSandboxId:8926b8c1a48e408d757c6eb8e5098c88c02ef6fcaafae56068d9de6eb05d0422,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1743707431126852810,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f6eb32c79515b02faaa8bb7c4a5e17a,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ea4b09c95187874ae33331f9018aee2ffdcd8fa85fd20918796069a1d300d67,PodSandboxId:05b0d0785dc20e405cd2f82ed4d8394215414d47401578430e19f12668b54557,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1743707431148780138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e95c0f4cc638ad564a9d751e153bd224,},Annotations:map[string]str
ing{io.kubernetes.container.hash: cca4ed8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c4ad08be99306c61f183438981abda4ed4f81f5e9834d0cb6c6e575993a7ec,PodSandboxId:220d0d43198ae3cd656f090e84b63e97a1629518265f5aa49514264257de3ca9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1743707431140165529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-159739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274963dd5cb9232944b5364387b393ae,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=573d7f61-4cd6-43a2-8afa-e3e1543af0aa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d898eda8ba0a4       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   8009c4076cb6d       coredns-6d4b75cb6d-rmrqx
	23b432c291889       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   919bf8acba152       kube-proxy-m8jxc
	92d37d3584b72       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   5079226fc2385       storage-provisioner
	9dac06ee43dd4       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   2a4d05886d4c8       etcd-test-preload-159739
	3ea4b09c95187       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   05b0d0785dc20       kube-apiserver-test-preload-159739
	f5c4ad08be993       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   220d0d43198ae       kube-controller-manager-test-preload-159739
	6a07db1d245c3       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   8926b8c1a48e4       kube-scheduler-test-preload-159739
	
	
	==> coredns [d898eda8ba0a424294b4a0639d41b38b3aba6c11f51f8fb24e0a7e8fb1904d27] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:38297 - 56446 "HINFO IN 2736456742764431037.2640476978216992809. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010102413s
	
	
	==> describe nodes <==
	Name:               test-preload-159739
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-159739
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053
	                    minikube.k8s.io/name=test-preload-159739
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_03T19_08_28_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 03 Apr 2025 19:08:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-159739
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 03 Apr 2025 19:10:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 03 Apr 2025 19:10:45 +0000   Thu, 03 Apr 2025 19:08:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 03 Apr 2025 19:10:45 +0000   Thu, 03 Apr 2025 19:08:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 03 Apr 2025 19:10:45 +0000   Thu, 03 Apr 2025 19:08:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 03 Apr 2025 19:10:45 +0000   Thu, 03 Apr 2025 19:10:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    test-preload-159739
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dab5f446cc9841878a41b517f864b310
	  System UUID:                dab5f446-cc98-4187-8a41-b517f864b310
	  Boot ID:                    1f4419f0-2ba8-42ae-9eb1-7f5d5113a4d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-rmrqx                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m12s
	  kube-system                 etcd-test-preload-159739                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m25s
	  kube-system                 kube-apiserver-test-preload-159739             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-test-preload-159739    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-m8jxc                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-test-preload-159739             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 2m8s               kube-proxy       
	  Normal  Starting                 2m25s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m25s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m25s              kubelet          Node test-preload-159739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s              kubelet          Node test-preload-159739 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s              kubelet          Node test-preload-159739 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m15s              kubelet          Node test-preload-159739 status is now: NodeReady
	  Normal  RegisteredNode           2m12s              node-controller  Node test-preload-159739 event: Registered Node test-preload-159739 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-159739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-159739 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-159739 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node test-preload-159739 event: Registered Node test-preload-159739 in Controller
	
	
	==> dmesg <==
	[Apr 3 19:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050932] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036745] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr 3 19:10] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.883548] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.531574] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.888933] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.064184] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050299] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.176925] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.119995] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.246767] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[ +12.693373] systemd-fstab-generator[988]: Ignoring "noauto" option for root device
	[  +0.058918] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.627896] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +6.148465] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.924142] systemd-fstab-generator[1753]: Ignoring "noauto" option for root device
	[  +5.035560] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [9dac06ee43dd413b7ed6b6caa7d845a051ef105e521f1a86eaa6ac5dcd96b7fa] <==
	{"level":"info","ts":"2025-04-03T19:10:31.538Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"3276445ff8d31e34","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-03T19:10:31.539Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-03T19:10:31.539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 switched to configuration voters=(3636168928135421492)"}
	{"level":"info","ts":"2025-04-03T19:10:31.540Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","added-peer-id":"3276445ff8d31e34","added-peer-peer-urls":["https://192.168.39.100:2380"]}
	{"level":"info","ts":"2025-04-03T19:10:31.541Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-03T19:10:31.541Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-03T19:10:31.568Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-03T19:10:31.569Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3276445ff8d31e34","initial-advertise-peer-urls":["https://192.168.39.100:2380"],"listen-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.100:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-03T19:10:31.569Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-03T19:10:31.569Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2025-04-03T19:10:31.569Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2025-04-03T19:10:33.009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-03T19:10:33.009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-03T19:10:33.009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgPreVoteResp from 3276445ff8d31e34 at term 2"}
	{"level":"info","ts":"2025-04-03T19:10:33.009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became candidate at term 3"}
	{"level":"info","ts":"2025-04-03T19:10:33.009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgVoteResp from 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2025-04-03T19:10:33.009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became leader at term 3"}
	{"level":"info","ts":"2025-04-03T19:10:33.009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3276445ff8d31e34 elected leader 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2025-04-03T19:10:33.017Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"3276445ff8d31e34","local-member-attributes":"{Name:test-preload-159739 ClientURLs:[https://192.168.39.100:2379]}","request-path":"/0/members/3276445ff8d31e34/attributes","cluster-id":"6cf58294dcaef1c8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-03T19:10:33.017Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-03T19:10:33.017Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-03T19:10:33.019Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2025-04-03T19:10:33.019Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-03T19:10:33.019Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-03T19:10:33.019Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:10:53 up 0 min,  0 users,  load average: 0.31, 0.09, 0.03
	Linux test-preload-159739 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3ea4b09c95187874ae33331f9018aee2ffdcd8fa85fd20918796069a1d300d67] <==
	I0403 19:10:35.336836       1 establishing_controller.go:76] Starting EstablishingController
	I0403 19:10:35.337249       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0403 19:10:35.337335       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0403 19:10:35.337530       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0403 19:10:35.337744       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0403 19:10:35.349404       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0403 19:10:35.394130       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0403 19:10:35.454178       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0403 19:10:35.455825       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0403 19:10:35.459502       1 cache.go:39] Caches are synced for autoregister controller
	I0403 19:10:35.460326       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0403 19:10:35.461966       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0403 19:10:35.467876       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0403 19:10:35.484630       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0403 19:10:35.495027       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0403 19:10:35.946408       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0403 19:10:36.262675       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0403 19:10:36.974688       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0403 19:10:37.146899       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0403 19:10:37.157416       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0403 19:10:37.197637       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0403 19:10:37.216083       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0403 19:10:37.221334       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0403 19:10:47.753888       1 controller.go:611] quota admission added evaluator for: endpoints
	I0403 19:10:47.905494       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f5c4ad08be99306c61f183438981abda4ed4f81f5e9834d0cb6c6e575993a7ec] <==
	I0403 19:10:47.733414       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0403 19:10:47.735551       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0403 19:10:47.735656       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0403 19:10:47.735722       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0403 19:10:47.740482       1 shared_informer.go:262] Caches are synced for ephemeral
	I0403 19:10:47.745951       1 shared_informer.go:262] Caches are synced for endpoint
	I0403 19:10:47.755488       1 shared_informer.go:262] Caches are synced for deployment
	I0403 19:10:47.760870       1 shared_informer.go:262] Caches are synced for PV protection
	I0403 19:10:47.764368       1 shared_informer.go:262] Caches are synced for expand
	I0403 19:10:47.766685       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0403 19:10:47.768875       1 shared_informer.go:262] Caches are synced for attach detach
	I0403 19:10:47.768996       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0403 19:10:47.770379       1 shared_informer.go:262] Caches are synced for stateful set
	I0403 19:10:47.771576       1 shared_informer.go:262] Caches are synced for crt configmap
	I0403 19:10:47.774841       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0403 19:10:47.778115       1 shared_informer.go:262] Caches are synced for disruption
	I0403 19:10:47.778132       1 disruption.go:371] Sending events to api server.
	I0403 19:10:47.781390       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0403 19:10:47.863377       1 shared_informer.go:262] Caches are synced for namespace
	I0403 19:10:47.864815       1 shared_informer.go:262] Caches are synced for service account
	I0403 19:10:47.894421       1 shared_informer.go:262] Caches are synced for resource quota
	I0403 19:10:47.939209       1 shared_informer.go:262] Caches are synced for resource quota
	I0403 19:10:48.398256       1 shared_informer.go:262] Caches are synced for garbage collector
	I0403 19:10:48.409565       1 shared_informer.go:262] Caches are synced for garbage collector
	I0403 19:10:48.409594       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [23b432c291889602d2621be9d44b15f13d6b6f7d563a39a7a1d0dcb5df5d1e32] <==
	I0403 19:10:36.934388       1 node.go:163] Successfully retrieved node IP: 192.168.39.100
	I0403 19:10:36.934736       1 server_others.go:138] "Detected node IP" address="192.168.39.100"
	I0403 19:10:36.934895       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0403 19:10:36.960329       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0403 19:10:36.960397       1 server_others.go:206] "Using iptables Proxier"
	I0403 19:10:36.961035       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0403 19:10:36.961573       1 server.go:661] "Version info" version="v1.24.4"
	I0403 19:10:36.961673       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 19:10:36.964437       1 config.go:317] "Starting service config controller"
	I0403 19:10:36.964462       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0403 19:10:36.964529       1 config.go:226] "Starting endpoint slice config controller"
	I0403 19:10:36.964534       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0403 19:10:36.968402       1 config.go:444] "Starting node config controller"
	I0403 19:10:36.968516       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0403 19:10:37.065469       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0403 19:10:37.065553       1 shared_informer.go:262] Caches are synced for service config
	I0403 19:10:37.069354       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [6a07db1d245c3406708ca093bb5f04db756d624b17f4e2249cb676a90cd131e4] <==
	I0403 19:10:31.985748       1 serving.go:348] Generated self-signed cert in-memory
	W0403 19:10:35.358479       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0403 19:10:35.359578       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0403 19:10:35.359650       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0403 19:10:35.359678       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0403 19:10:35.408122       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0403 19:10:35.408239       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 19:10:35.412704       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0403 19:10:35.413055       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0403 19:10:35.413162       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0403 19:10:35.414056       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0403 19:10:35.514002       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.440306    1124 topology_manager.go:200] "Topology Admit Handler"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: E0403 19:10:35.441536    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-rmrqx" podUID=e4a5aca5-34eb-47d9-b741-87120e3b7cdc
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.469431    1124 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-159739"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.469711    1124 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-159739"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.472910    1124 setters.go:532] "Node became not ready" node="test-preload-159739" condition={Type:Ready Status:False LastHeartbeatTime:2025-04-03 19:10:35.472869215 +0000 UTC m=+5.140316941 LastTransitionTime:2025-04-03 19:10:35.472869215 +0000 UTC m=+5.140316941 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.497095    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/895a2584-30e0-498f-8e86-28309568569e-kube-proxy\") pod \"kube-proxy-m8jxc\" (UID: \"895a2584-30e0-498f-8e86-28309568569e\") " pod="kube-system/kube-proxy-m8jxc"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.497156    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/895a2584-30e0-498f-8e86-28309568569e-xtables-lock\") pod \"kube-proxy-m8jxc\" (UID: \"895a2584-30e0-498f-8e86-28309568569e\") " pod="kube-system/kube-proxy-m8jxc"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.497185    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26t62\" (UniqueName: \"kubernetes.io/projected/895a2584-30e0-498f-8e86-28309568569e-kube-api-access-26t62\") pod \"kube-proxy-m8jxc\" (UID: \"895a2584-30e0-498f-8e86-28309568569e\") " pod="kube-system/kube-proxy-m8jxc"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.497207    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4a5aca5-34eb-47d9-b741-87120e3b7cdc-config-volume\") pod \"coredns-6d4b75cb6d-rmrqx\" (UID: \"e4a5aca5-34eb-47d9-b741-87120e3b7cdc\") " pod="kube-system/coredns-6d4b75cb6d-rmrqx"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.497235    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jbj4\" (UniqueName: \"kubernetes.io/projected/e4a5aca5-34eb-47d9-b741-87120e3b7cdc-kube-api-access-2jbj4\") pod \"coredns-6d4b75cb6d-rmrqx\" (UID: \"e4a5aca5-34eb-47d9-b741-87120e3b7cdc\") " pod="kube-system/coredns-6d4b75cb6d-rmrqx"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.497264    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld578\" (UniqueName: \"kubernetes.io/projected/83a99ecf-5581-479d-acdf-f495a52e0b43-kube-api-access-ld578\") pod \"storage-provisioner\" (UID: \"83a99ecf-5581-479d-acdf-f495a52e0b43\") " pod="kube-system/storage-provisioner"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.497286    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/895a2584-30e0-498f-8e86-28309568569e-lib-modules\") pod \"kube-proxy-m8jxc\" (UID: \"895a2584-30e0-498f-8e86-28309568569e\") " pod="kube-system/kube-proxy-m8jxc"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.497307    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/83a99ecf-5581-479d-acdf-f495a52e0b43-tmp\") pod \"storage-provisioner\" (UID: \"83a99ecf-5581-479d-acdf-f495a52e0b43\") " pod="kube-system/storage-provisioner"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: I0403 19:10:35.497324    1124 reconciler.go:159] "Reconciler: start to sync state"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: E0403 19:10:35.502868    1124 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: E0403 19:10:35.604249    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 03 19:10:35 test-preload-159739 kubelet[1124]: E0403 19:10:35.604405    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e4a5aca5-34eb-47d9-b741-87120e3b7cdc-config-volume podName:e4a5aca5-34eb-47d9-b741-87120e3b7cdc nodeName:}" failed. No retries permitted until 2025-04-03 19:10:36.104344683 +0000 UTC m=+5.771792449 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e4a5aca5-34eb-47d9-b741-87120e3b7cdc-config-volume") pod "coredns-6d4b75cb6d-rmrqx" (UID: "e4a5aca5-34eb-47d9-b741-87120e3b7cdc") : object "kube-system"/"coredns" not registered
	Apr 03 19:10:36 test-preload-159739 kubelet[1124]: E0403 19:10:36.107169    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 03 19:10:36 test-preload-159739 kubelet[1124]: E0403 19:10:36.107280    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e4a5aca5-34eb-47d9-b741-87120e3b7cdc-config-volume podName:e4a5aca5-34eb-47d9-b741-87120e3b7cdc nodeName:}" failed. No retries permitted until 2025-04-03 19:10:37.107265097 +0000 UTC m=+6.774712838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e4a5aca5-34eb-47d9-b741-87120e3b7cdc-config-volume") pod "coredns-6d4b75cb6d-rmrqx" (UID: "e4a5aca5-34eb-47d9-b741-87120e3b7cdc") : object "kube-system"/"coredns" not registered
	Apr 03 19:10:37 test-preload-159739 kubelet[1124]: E0403 19:10:37.114528    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 03 19:10:37 test-preload-159739 kubelet[1124]: E0403 19:10:37.114631    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e4a5aca5-34eb-47d9-b741-87120e3b7cdc-config-volume podName:e4a5aca5-34eb-47d9-b741-87120e3b7cdc nodeName:}" failed. No retries permitted until 2025-04-03 19:10:39.114589381 +0000 UTC m=+8.782037119 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e4a5aca5-34eb-47d9-b741-87120e3b7cdc-config-volume") pod "coredns-6d4b75cb6d-rmrqx" (UID: "e4a5aca5-34eb-47d9-b741-87120e3b7cdc") : object "kube-system"/"coredns" not registered
	Apr 03 19:10:37 test-preload-159739 kubelet[1124]: E0403 19:10:37.543041    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-rmrqx" podUID=e4a5aca5-34eb-47d9-b741-87120e3b7cdc
	Apr 03 19:10:39 test-preload-159739 kubelet[1124]: E0403 19:10:39.129719    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 03 19:10:39 test-preload-159739 kubelet[1124]: E0403 19:10:39.129799    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e4a5aca5-34eb-47d9-b741-87120e3b7cdc-config-volume podName:e4a5aca5-34eb-47d9-b741-87120e3b7cdc nodeName:}" failed. No retries permitted until 2025-04-03 19:10:43.129783904 +0000 UTC m=+12.797231642 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e4a5aca5-34eb-47d9-b741-87120e3b7cdc-config-volume") pod "coredns-6d4b75cb6d-rmrqx" (UID: "e4a5aca5-34eb-47d9-b741-87120e3b7cdc") : object "kube-system"/"coredns" not registered
	Apr 03 19:10:39 test-preload-159739 kubelet[1124]: E0403 19:10:39.542943    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-rmrqx" podUID=e4a5aca5-34eb-47d9-b741-87120e3b7cdc
	
	
	==> storage-provisioner [92d37d3584b72cfe3c051d235ccf3a9d3c6a511002238c8574bc145a53b9d0c1] <==
	I0403 19:10:36.565466       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-159739 -n test-preload-159739
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-159739 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-159739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-159739
--- FAIL: TestPreload (213.89s)

x
+
TestKubernetesUpgrade (407.96s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m29.115670372s)

-- stdout --
	* [kubernetes-upgrade-523797] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-523797" primary control-plane node in "kubernetes-upgrade-523797" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0403 19:12:51.319924   54806 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:12:51.320040   54806 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:12:51.320048   54806 out.go:358] Setting ErrFile to fd 2...
	I0403 19:12:51.320054   54806 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:12:51.320321   54806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:12:51.321073   54806 out.go:352] Setting JSON to false
	I0403 19:12:51.322262   54806 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6916,"bootTime":1743700655,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:12:51.322350   54806 start.go:139] virtualization: kvm guest
	I0403 19:12:51.324208   54806 out.go:177] * [kubernetes-upgrade-523797] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:12:51.325439   54806 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:12:51.325440   54806 notify.go:220] Checking for updates...
	I0403 19:12:51.327355   54806 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:12:51.328504   54806 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:12:51.330230   54806 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:12:51.331366   54806 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:12:51.333147   54806 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:12:51.334471   54806 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:12:51.371585   54806 out.go:177] * Using the kvm2 driver based on user configuration
	I0403 19:12:51.372715   54806 start.go:297] selected driver: kvm2
	I0403 19:12:51.372731   54806 start.go:901] validating driver "kvm2" against <nil>
	I0403 19:12:51.372749   54806 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:12:51.373420   54806 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:12:51.373488   54806 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 19:12:51.393418   54806 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0403 19:12:51.393461   54806 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 19:12:51.393682   54806 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0403 19:12:51.393715   54806 cni.go:84] Creating CNI manager for ""
	I0403 19:12:51.393753   54806 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:12:51.393761   54806 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 19:12:51.393818   54806 start.go:340] cluster config:
	{Name:kubernetes-upgrade-523797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-523797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:12:51.393936   54806 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:12:51.395395   54806 out.go:177] * Starting "kubernetes-upgrade-523797" primary control-plane node in "kubernetes-upgrade-523797" cluster
	I0403 19:12:51.396362   54806 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0403 19:12:51.396401   54806 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0403 19:12:51.396409   54806 cache.go:56] Caching tarball of preloaded images
	I0403 19:12:51.396505   54806 preload.go:172] Found /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0403 19:12:51.396519   54806 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0403 19:12:51.396855   54806 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/config.json ...
	I0403 19:12:51.396880   54806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/config.json: {Name:mk91e978dbaf6b7977756dff05eea3b5c5f3232d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:12:51.397031   54806 start.go:360] acquireMachinesLock for kubernetes-upgrade-523797: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 19:12:51.397068   54806 start.go:364] duration metric: took 17.491µs to acquireMachinesLock for "kubernetes-upgrade-523797"
	I0403 19:12:51.397092   54806 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-523797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-523797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:12:51.397161   54806 start.go:125] createHost starting for "" (driver="kvm2")
	I0403 19:12:51.398631   54806 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0403 19:12:51.398762   54806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:12:51.398799   54806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:12:51.414689   54806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0403 19:12:51.415251   54806 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:12:51.415796   54806 main.go:141] libmachine: Using API Version  1
	I0403 19:12:51.415817   54806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:12:51.416170   54806 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:12:51.416348   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetMachineName
	I0403 19:12:51.416483   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:12:51.416633   54806 start.go:159] libmachine.API.Create for "kubernetes-upgrade-523797" (driver="kvm2")
	I0403 19:12:51.416664   54806 client.go:168] LocalClient.Create starting
	I0403 19:12:51.416695   54806 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem
	I0403 19:12:51.416738   54806 main.go:141] libmachine: Decoding PEM data...
	I0403 19:12:51.416761   54806 main.go:141] libmachine: Parsing certificate...
	I0403 19:12:51.416843   54806 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem
	I0403 19:12:51.416870   54806 main.go:141] libmachine: Decoding PEM data...
	I0403 19:12:51.416887   54806 main.go:141] libmachine: Parsing certificate...
	I0403 19:12:51.416910   54806 main.go:141] libmachine: Running pre-create checks...
	I0403 19:12:51.416929   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .PreCreateCheck
	I0403 19:12:51.417213   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetConfigRaw
	I0403 19:12:51.417595   54806 main.go:141] libmachine: Creating machine...
	I0403 19:12:51.417611   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .Create
	I0403 19:12:51.417761   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) creating KVM machine...
	I0403 19:12:51.417780   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) creating network...
	I0403 19:12:51.419134   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found existing default KVM network
	I0403 19:12:51.419896   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:51.419749   54864 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112da0}
	I0403 19:12:51.419964   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | created network xml: 
	I0403 19:12:51.419990   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | <network>
	I0403 19:12:51.420003   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG |   <name>mk-kubernetes-upgrade-523797</name>
	I0403 19:12:51.420017   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG |   <dns enable='no'/>
	I0403 19:12:51.420031   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG |   
	I0403 19:12:51.420040   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0403 19:12:51.420052   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG |     <dhcp>
	I0403 19:12:51.420064   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0403 19:12:51.420077   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG |     </dhcp>
	I0403 19:12:51.420088   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG |   </ip>
	I0403 19:12:51.420101   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG |   
	I0403 19:12:51.420112   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | </network>
	I0403 19:12:51.420122   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | 
	I0403 19:12:51.424660   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | trying to create private KVM network mk-kubernetes-upgrade-523797 192.168.39.0/24...
	I0403 19:12:51.493865   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | private KVM network mk-kubernetes-upgrade-523797 192.168.39.0/24 created
	I0403 19:12:51.493992   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) setting up store path in /home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797 ...
	I0403 19:12:51.494017   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:51.493857   54864 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:12:51.494034   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) building disk image from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0403 19:12:51.494064   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Downloading /home/jenkins/minikube-integration/20591-14371/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0403 19:12:51.740017   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:51.739851   54864 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/id_rsa...
	I0403 19:12:52.058045   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:52.057928   54864 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/kubernetes-upgrade-523797.rawdisk...
	I0403 19:12:52.058072   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | Writing magic tar header
	I0403 19:12:52.058088   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | Writing SSH key tar header
	I0403 19:12:52.058101   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:52.058057   54864 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797 ...
	I0403 19:12:52.058176   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797
	I0403 19:12:52.058195   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797 (perms=drwx------)
	I0403 19:12:52.058202   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines
	I0403 19:12:52.058210   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines (perms=drwxr-xr-x)
	I0403 19:12:52.058221   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube (perms=drwxr-xr-x)
	I0403 19:12:52.058230   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) setting executable bit set on /home/jenkins/minikube-integration/20591-14371 (perms=drwxrwxr-x)
	I0403 19:12:52.058240   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0403 19:12:52.058245   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0403 19:12:52.058260   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:12:52.058271   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) creating domain...
	I0403 19:12:52.058289   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371
	I0403 19:12:52.058301   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0403 19:12:52.058322   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | checking permissions on dir: /home/jenkins
	I0403 19:12:52.058330   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | checking permissions on dir: /home
	I0403 19:12:52.058359   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | skipping /home - not owner
	I0403 19:12:52.059390   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) define libvirt domain using xml: 
	I0403 19:12:52.059412   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) <domain type='kvm'>
	I0403 19:12:52.059422   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   <name>kubernetes-upgrade-523797</name>
	I0403 19:12:52.059429   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   <memory unit='MiB'>2200</memory>
	I0403 19:12:52.059434   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   <vcpu>2</vcpu>
	I0403 19:12:52.059440   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   <features>
	I0403 19:12:52.059445   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <acpi/>
	I0403 19:12:52.059455   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <apic/>
	I0403 19:12:52.059465   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <pae/>
	I0403 19:12:52.059472   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     
	I0403 19:12:52.059480   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   </features>
	I0403 19:12:52.059487   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   <cpu mode='host-passthrough'>
	I0403 19:12:52.059498   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   
	I0403 19:12:52.059509   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   </cpu>
	I0403 19:12:52.059518   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   <os>
	I0403 19:12:52.059530   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <type>hvm</type>
	I0403 19:12:52.059611   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <boot dev='cdrom'/>
	I0403 19:12:52.059637   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <boot dev='hd'/>
	I0403 19:12:52.059652   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <bootmenu enable='no'/>
	I0403 19:12:52.059662   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   </os>
	I0403 19:12:52.059674   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   <devices>
	I0403 19:12:52.059685   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <disk type='file' device='cdrom'>
	I0403 19:12:52.059703   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/boot2docker.iso'/>
	I0403 19:12:52.059718   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <target dev='hdc' bus='scsi'/>
	I0403 19:12:52.059730   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <readonly/>
	I0403 19:12:52.059737   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     </disk>
	I0403 19:12:52.059749   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <disk type='file' device='disk'>
	I0403 19:12:52.059761   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0403 19:12:52.059778   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/kubernetes-upgrade-523797.rawdisk'/>
	I0403 19:12:52.059797   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <target dev='hda' bus='virtio'/>
	I0403 19:12:52.059809   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     </disk>
	I0403 19:12:52.059819   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <interface type='network'>
	I0403 19:12:52.059831   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <source network='mk-kubernetes-upgrade-523797'/>
	I0403 19:12:52.059841   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <model type='virtio'/>
	I0403 19:12:52.059852   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     </interface>
	I0403 19:12:52.059867   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <interface type='network'>
	I0403 19:12:52.059879   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <source network='default'/>
	I0403 19:12:52.059890   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <model type='virtio'/>
	I0403 19:12:52.059901   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     </interface>
	I0403 19:12:52.059910   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <serial type='pty'>
	I0403 19:12:52.059921   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <target port='0'/>
	I0403 19:12:52.059931   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     </serial>
	I0403 19:12:52.059945   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <console type='pty'>
	I0403 19:12:52.059963   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <target type='serial' port='0'/>
	I0403 19:12:52.059972   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     </console>
	I0403 19:12:52.059979   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     <rng model='virtio'>
	I0403 19:12:52.059999   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)       <backend model='random'>/dev/random</backend>
	I0403 19:12:52.060016   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     </rng>
	I0403 19:12:52.060035   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     
	I0403 19:12:52.060046   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)     
	I0403 19:12:52.060057   54806 main.go:141] libmachine: (kubernetes-upgrade-523797)   </devices>
	I0403 19:12:52.060068   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) </domain>
	I0403 19:12:52.060080   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) 
	I0403 19:12:52.064284   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:a4:0c:1e in network default
	I0403 19:12:52.064835   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) starting domain...
	I0403 19:12:52.064852   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:52.064857   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) ensuring networks are active...
	I0403 19:12:52.065452   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Ensuring network default is active
	I0403 19:12:52.065739   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Ensuring network mk-kubernetes-upgrade-523797 is active
	I0403 19:12:52.066243   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) getting domain XML...
	I0403 19:12:52.067016   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) creating domain...
	I0403 19:12:53.347799   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) waiting for IP...
	I0403 19:12:53.348680   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:53.349000   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:12:53.349065   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:53.349006   54864 retry.go:31] will retry after 261.164908ms: waiting for domain to come up
	I0403 19:12:53.611370   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:53.611863   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:12:53.611890   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:53.611836   54864 retry.go:31] will retry after 301.869092ms: waiting for domain to come up
	I0403 19:12:53.915393   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:53.915820   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:12:53.915840   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:53.915803   54864 retry.go:31] will retry after 436.286885ms: waiting for domain to come up
	I0403 19:12:54.353402   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:54.353802   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:12:54.353850   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:54.353780   54864 retry.go:31] will retry after 377.608803ms: waiting for domain to come up
	I0403 19:12:54.733046   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:54.733461   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:12:54.733552   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:54.733449   54864 retry.go:31] will retry after 529.865419ms: waiting for domain to come up
	I0403 19:12:55.264978   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:55.265436   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:12:55.265462   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:55.265410   54864 retry.go:31] will retry after 862.710929ms: waiting for domain to come up
	I0403 19:12:56.129276   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:56.129620   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:12:56.129643   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:56.129598   54864 retry.go:31] will retry after 936.203389ms: waiting for domain to come up
	I0403 19:12:57.067649   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:57.068038   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:12:57.068060   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:57.068017   54864 retry.go:31] will retry after 1.103344369s: waiting for domain to come up
	I0403 19:12:58.172466   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:58.172943   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:12:58.172975   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:58.172924   54864 retry.go:31] will retry after 1.664933887s: waiting for domain to come up
	I0403 19:12:59.839573   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:12:59.840005   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:12:59.840048   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:12:59.839982   54864 retry.go:31] will retry after 1.538430944s: waiting for domain to come up
	I0403 19:13:01.380672   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:01.381150   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:13:01.381195   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:13:01.381104   54864 retry.go:31] will retry after 2.39605841s: waiting for domain to come up
	I0403 19:13:03.779957   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:03.780397   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:13:03.780424   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:13:03.780359   54864 retry.go:31] will retry after 2.84985437s: waiting for domain to come up
	I0403 19:13:06.632569   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:06.632958   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:13:06.632982   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:13:06.632937   54864 retry.go:31] will retry after 3.018337166s: waiting for domain to come up
	I0403 19:13:09.654908   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:09.655341   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find current IP address of domain kubernetes-upgrade-523797 in network mk-kubernetes-upgrade-523797
	I0403 19:13:09.655370   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | I0403 19:13:09.655303   54864 retry.go:31] will retry after 4.138579107s: waiting for domain to come up
	I0403 19:13:13.795096   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:13.795599   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has current primary IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:13.795628   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) found domain IP: 192.168.39.159
	I0403 19:13:13.795641   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) reserving static IP address...
	I0403 19:13:13.796435   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-523797", mac: "52:54:00:47:b5:19", ip: "192.168.39.159"} in network mk-kubernetes-upgrade-523797
	I0403 19:13:13.870530   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) reserved static IP address 192.168.39.159 for domain kubernetes-upgrade-523797
	I0403 19:13:13.870559   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | Getting to WaitForSSH function...
	I0403 19:13:13.870568   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) waiting for SSH...
	I0403 19:13:13.873135   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:13.873560   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:13.873586   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:13.873834   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | Using SSH client type: external
	I0403 19:13:13.873862   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | Using SSH private key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/id_rsa (-rw-------)
	I0403 19:13:13.873902   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0403 19:13:13.873916   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | About to run SSH command:
	I0403 19:13:13.873939   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | exit 0
	I0403 19:13:14.006776   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | SSH cmd err, output: <nil>: 
	I0403 19:13:14.007080   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) KVM machine creation complete
	I0403 19:13:14.007381   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetConfigRaw
	I0403 19:13:14.007962   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:13:14.008132   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:13:14.008264   54806 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0403 19:13:14.008279   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetState
	I0403 19:13:14.009495   54806 main.go:141] libmachine: Detecting operating system of created instance...
	I0403 19:13:14.009510   54806 main.go:141] libmachine: Waiting for SSH to be available...
	I0403 19:13:14.009517   54806 main.go:141] libmachine: Getting to WaitForSSH function...
	I0403 19:13:14.009525   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:14.012100   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.012425   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:14.012452   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.012589   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:13:14.012783   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.012919   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.013026   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:13:14.013239   54806 main.go:141] libmachine: Using SSH client type: native
	I0403 19:13:14.013458   54806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0403 19:13:14.013469   54806 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0403 19:13:14.126062   54806 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:13:14.126085   54806 main.go:141] libmachine: Detecting the provisioner...
	I0403 19:13:14.126095   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:14.128798   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.129120   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:14.129142   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.129295   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:13:14.129478   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.129645   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.129767   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:13:14.129904   54806 main.go:141] libmachine: Using SSH client type: native
	I0403 19:13:14.130110   54806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0403 19:13:14.130121   54806 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0403 19:13:14.243241   54806 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0403 19:13:14.243321   54806 main.go:141] libmachine: found compatible host: buildroot
	I0403 19:13:14.243330   54806 main.go:141] libmachine: Provisioning with buildroot...
	I0403 19:13:14.243338   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetMachineName
	I0403 19:13:14.243577   54806 buildroot.go:166] provisioning hostname "kubernetes-upgrade-523797"
	I0403 19:13:14.243601   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetMachineName
	I0403 19:13:14.243794   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:14.246428   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.246780   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:14.246814   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.246898   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:13:14.247069   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.247189   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.247288   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:13:14.247432   54806 main.go:141] libmachine: Using SSH client type: native
	I0403 19:13:14.247652   54806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0403 19:13:14.247675   54806 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-523797 && echo "kubernetes-upgrade-523797" | sudo tee /etc/hostname
	I0403 19:13:14.372297   54806 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-523797
	
	I0403 19:13:14.372340   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:14.375173   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.375533   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:14.375560   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.375676   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:13:14.375882   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.376123   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.376250   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:13:14.376455   54806 main.go:141] libmachine: Using SSH client type: native
	I0403 19:13:14.376665   54806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0403 19:13:14.376697   54806 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-523797' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-523797/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-523797' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0403 19:13:14.498747   54806 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:13:14.498778   54806 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20591-14371/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-14371/.minikube}
	I0403 19:13:14.498808   54806 buildroot.go:174] setting up certificates
	I0403 19:13:14.498817   54806 provision.go:84] configureAuth start
	I0403 19:13:14.498846   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetMachineName
	I0403 19:13:14.499118   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetIP
	I0403 19:13:14.501656   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.501984   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:14.502010   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.502152   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:14.504407   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.504710   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:14.504736   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.504874   54806 provision.go:143] copyHostCerts
	I0403 19:13:14.504935   54806 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem, removing ...
	I0403 19:13:14.504957   54806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem
	I0403 19:13:14.505024   54806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem (1675 bytes)
	I0403 19:13:14.505128   54806 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem, removing ...
	I0403 19:13:14.505137   54806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem
	I0403 19:13:14.505160   54806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem (1082 bytes)
	I0403 19:13:14.505254   54806 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem, removing ...
	I0403 19:13:14.505263   54806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem
	I0403 19:13:14.505301   54806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem (1123 bytes)
	I0403 19:13:14.505381   54806 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-523797 san=[127.0.0.1 192.168.39.159 kubernetes-upgrade-523797 localhost minikube]
	I0403 19:13:14.581113   54806 provision.go:177] copyRemoteCerts
	I0403 19:13:14.581183   54806 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0403 19:13:14.581213   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:14.583527   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.583805   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:14.583834   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.583964   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:13:14.584138   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.584297   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:13:14.584421   54806 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/id_rsa Username:docker}
	I0403 19:13:14.668671   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0403 19:13:14.691057   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0403 19:13:14.712309   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0403 19:13:14.733246   54806 provision.go:87] duration metric: took 234.399967ms to configureAuth
	I0403 19:13:14.733268   54806 buildroot.go:189] setting minikube options for container-runtime
	I0403 19:13:14.733420   54806 config.go:182] Loaded profile config "kubernetes-upgrade-523797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:13:14.733481   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:14.736121   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.736497   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:14.736528   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.736707   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:13:14.736869   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.737002   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.737106   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:13:14.737314   54806 main.go:141] libmachine: Using SSH client type: native
	I0403 19:13:14.737532   54806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0403 19:13:14.737555   54806 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0403 19:13:14.957196   54806 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0403 19:13:14.957233   54806 main.go:141] libmachine: Checking connection to Docker...
	I0403 19:13:14.957244   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetURL
	I0403 19:13:14.958486   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | using libvirt version 6000000
	I0403 19:13:14.960875   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.961286   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:14.961317   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.961462   54806 main.go:141] libmachine: Docker is up and running!
	I0403 19:13:14.961480   54806 main.go:141] libmachine: Reticulating splines...
	I0403 19:13:14.961487   54806 client.go:171] duration metric: took 23.544812942s to LocalClient.Create
	I0403 19:13:14.961510   54806 start.go:167] duration metric: took 23.544879465s to libmachine.API.Create "kubernetes-upgrade-523797"
	I0403 19:13:14.961520   54806 start.go:293] postStartSetup for "kubernetes-upgrade-523797" (driver="kvm2")
	I0403 19:13:14.961542   54806 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0403 19:13:14.961558   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:13:14.961756   54806 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0403 19:13:14.961776   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:14.963963   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.964277   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:14.964308   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:14.964442   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:13:14.964604   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:14.964733   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:13:14.964823   54806 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/id_rsa Username:docker}
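For context on the sshutil lines above: minikube opens a plain SSH session to the guest using the address, port, user and key path printed in the log. The snippet below is a minimal sketch of an equivalent connection, not minikube's own sshutil code; it assumes the golang.org/x/crypto/ssh package and the 192.168.39.159 VM from this run, and it skips host-key verification because the test VM's key is not pinned anywhere.

// ssh_sketch.go - connect to the guest the way the log's sshutil client does
// (assumed equivalent; paths and address copied from the log above).
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no pinned host key
	}
	client, err := ssh.Dial("tcp", "192.168.39.159:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same command the provisioner runs a few lines below.
	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}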
	I0403 19:13:15.048461   54806 ssh_runner.go:195] Run: cat /etc/os-release
	I0403 19:13:15.052263   54806 info.go:137] Remote host: Buildroot 2023.02.9
	I0403 19:13:15.052285   54806 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/addons for local assets ...
	I0403 19:13:15.052362   54806 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/files for local assets ...
	I0403 19:13:15.052455   54806 filesync.go:149] local asset: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem -> 215522.pem in /etc/ssl/certs
	I0403 19:13:15.052573   54806 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0403 19:13:15.061014   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:13:15.082599   54806 start.go:296] duration metric: took 121.065624ms for postStartSetup
	I0403 19:13:15.082655   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetConfigRaw
	I0403 19:13:15.083336   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetIP
	I0403 19:13:15.085765   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:15.086039   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:15.086066   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:15.086341   54806 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/config.json ...
	I0403 19:13:15.086512   54806 start.go:128] duration metric: took 23.689342526s to createHost
	I0403 19:13:15.086532   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:15.088793   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:15.089098   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:15.089134   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:15.089244   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:13:15.089423   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:15.089574   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:15.089701   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:13:15.089839   54806 main.go:141] libmachine: Using SSH client type: native
	I0403 19:13:15.090065   54806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I0403 19:13:15.090077   54806 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0403 19:13:15.207255   54806 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743707595.181634593
	
	I0403 19:13:15.207278   54806 fix.go:216] guest clock: 1743707595.181634593
	I0403 19:13:15.207285   54806 fix.go:229] Guest: 2025-04-03 19:13:15.181634593 +0000 UTC Remote: 2025-04-03 19:13:15.086522124 +0000 UTC m=+23.808792614 (delta=95.112469ms)
	I0403 19:13:15.207312   54806 fix.go:200] guest clock delta is within tolerance: 95.112469ms
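The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp taken just before the command ran, and accept the drift when it is small (95.112469ms here). A minimal sketch of that comparison, using the two timestamps from this log and an assumed 2-second threshold (the actual tolerance is not printed):

// clockdrift.go - recompute the guest-clock delta shown in the log above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch converts `date +%s.%N` output (e.g. "1743707595.181634593") into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(s, ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate fractional part to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, err := parseEpoch("1743707595.181634593") // guest output from the log
	if err != nil {
		panic(err)
	}
	remote := time.Date(2025, time.April, 3, 19, 13, 15, 86522124, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold; the real tolerance is not shown in this log
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= tolerance)
}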
	I0403 19:13:15.207326   54806 start.go:83] releasing machines lock for "kubernetes-upgrade-523797", held for 23.810237506s
	I0403 19:13:15.207356   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:13:15.207622   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetIP
	I0403 19:13:15.210245   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:15.210576   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:15.210602   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:15.210786   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:13:15.211196   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:13:15.211373   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:13:15.211487   54806 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0403 19:13:15.211527   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:15.211572   54806 ssh_runner.go:195] Run: cat /version.json
	I0403 19:13:15.211597   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:13:15.214203   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:15.214424   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:15.214575   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:15.214609   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:15.214730   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:15.214744   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:13:15.214749   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:15.214907   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:15.214974   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:13:15.215083   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:13:15.215149   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:13:15.215224   54806 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/id_rsa Username:docker}
	I0403 19:13:15.215288   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:13:15.215425   54806 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/id_rsa Username:docker}
	I0403 19:13:15.338039   54806 ssh_runner.go:195] Run: systemctl --version
	I0403 19:13:15.344118   54806 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0403 19:13:15.507090   54806 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0403 19:13:15.512998   54806 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0403 19:13:15.513065   54806 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0403 19:13:15.529836   54806 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
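The step above shells out to `find`/`mv` to park any bridge or podman CNI configs under a `.mk_disabled` suffix, so they cannot conflict with the CNI that minikube is about to configure (here it disabled 87-podman-bridge.conflist). A rough standard-library equivalent of that rename pass, using the same directory and name patterns as the log; it needs root:

// disable_bridge_cni.go - rename *bridge*/*podman* configs in /etc/cni/net.d to *.mk_disabled.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const cniDir = "/etc/cni/net.d"
	entries, err := os.ReadDir(cniDir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and files that were already disabled on a previous run.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(cniDir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			panic(err)
		}
		fmt.Println("disabled", src)
	}
}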
	I0403 19:13:15.529863   54806 start.go:495] detecting cgroup driver to use...
	I0403 19:13:15.529953   54806 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0403 19:13:15.546796   54806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0403 19:13:15.566345   54806 docker.go:217] disabling cri-docker service (if available) ...
	I0403 19:13:15.566392   54806 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0403 19:13:15.581940   54806 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0403 19:13:15.595069   54806 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0403 19:13:15.705779   54806 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0403 19:13:15.863506   54806 docker.go:233] disabling docker service ...
	I0403 19:13:15.863568   54806 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0403 19:13:15.877619   54806 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0403 19:13:15.890837   54806 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0403 19:13:16.009501   54806 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0403 19:13:16.134497   54806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0403 19:13:16.147798   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0403 19:13:16.164943   54806 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0403 19:13:16.165062   54806 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:13:16.174427   54806 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0403 19:13:16.174491   54806 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:13:16.184004   54806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:13:16.193490   54806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
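The three `sed` invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.2 pause image, the cgroupfs cgroup manager, and conmon_cgroup = "pod". The sketch below applies the same net edits with Go regexps instead of sed (same file path and values as the log; it must run as root and takes no backup):

// crio_dropin.go - rewrite the CRI-O drop-in the way the sed commands above do.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Point CRI-O at the pause image kubeadm v1.20 expects.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Drop any existing conmon_cgroup line, then set cgroupfs and re-add conmon_cgroup after it.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}

After this rewrite CRI-O has to be restarted (the log does `systemctl daemon-reload` and `systemctl restart crio` a few lines further down) for the new drop-in to take effect.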
	I0403 19:13:16.203118   54806 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0403 19:13:16.213311   54806 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0403 19:13:16.222363   54806 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0403 19:13:16.222418   54806 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0403 19:13:16.235305   54806 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0403 19:13:16.244206   54806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:13:16.375566   54806 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0403 19:13:16.480710   54806 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0403 19:13:16.480770   54806 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0403 19:13:16.485106   54806 start.go:563] Will wait 60s for crictl version
	I0403 19:13:16.485160   54806 ssh_runner.go:195] Run: which crictl
	I0403 19:13:16.488620   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0403 19:13:16.533045   54806 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
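Both 60-second waits above, first for /var/run/crio/crio.sock and then for a working `crictl version`, are simple poll-until-deadline loops. A minimal sketch of the socket wait, assuming the same path and timeout as the log and a 500ms poll interval (the real interval is not shown):

// wait_for_crio.go - poll for the CRI-O socket until it appears or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists, the runtime is ready to answer crictl
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is ready")
}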
	I0403 19:13:16.533118   54806 ssh_runner.go:195] Run: crio --version
	I0403 19:13:16.562858   54806 ssh_runner.go:195] Run: crio --version
	I0403 19:13:16.590262   54806 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0403 19:13:16.591314   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetIP
	I0403 19:13:16.594478   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:16.594871   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:13:16.594909   54806 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:13:16.595214   54806 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0403 19:13:16.599022   54806 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
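The bash one-liner above rewrites /etc/hosts in place: it strips any existing host.minikube.internal entry and appends one pointing at the gateway IP 192.168.39.1. The same rewrite, sketched without the shell (same file and IP as the log; needs root to write /etc/hosts):

// hosts_entry.go - replace the host.minikube.internal line in /etc/hosts.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as `grep -v $'\thost.minikube.internal$'` in the log.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}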
	I0403 19:13:16.610224   54806 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-523797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-523797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0403 19:13:16.610316   54806 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0403 19:13:16.610361   54806 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:13:16.641002   54806 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0403 19:13:16.641075   54806 ssh_runner.go:195] Run: which lz4
	I0403 19:13:16.644843   54806 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0403 19:13:16.648571   54806 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0403 19:13:16.648600   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0403 19:13:18.141507   54806 crio.go:462] duration metric: took 1.496706719s to copy over tarball
	I0403 19:13:18.141578   54806 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0403 19:13:20.732983   54806 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.591377615s)
	I0403 19:13:20.733016   54806 crio.go:469] duration metric: took 2.591482355s to extract the tarball
	I0403 19:13:20.733024   54806 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0403 19:13:20.774623   54806 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:13:20.818172   54806 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0403 19:13:20.818196   54806 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0403 19:13:20.818251   54806 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:13:20.818272   54806 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:13:20.818297   54806 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0403 19:13:20.818331   54806 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:13:20.818354   54806 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:13:20.818374   54806 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0403 19:13:20.818331   54806 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0403 19:13:20.818789   54806 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:13:20.820148   54806 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0403 19:13:20.820220   54806 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:13:20.820235   54806 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0403 19:13:20.820334   54806 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:13:20.820338   54806 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:13:20.820395   54806 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:13:20.820573   54806 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0403 19:13:20.820805   54806 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:13:21.042515   54806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:13:21.048217   54806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0403 19:13:21.085439   54806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:13:21.088849   54806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:13:21.090168   54806 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0403 19:13:21.090216   54806 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:13:21.090261   54806 ssh_runner.go:195] Run: which crictl
	I0403 19:13:21.092341   54806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:13:21.100166   54806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0403 19:13:21.104996   54806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0403 19:13:21.106792   54806 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0403 19:13:21.106865   54806 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0403 19:13:21.106909   54806 ssh_runner.go:195] Run: which crictl
	I0403 19:13:21.197645   54806 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0403 19:13:21.197692   54806 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:13:21.197729   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:13:21.197739   54806 ssh_runner.go:195] Run: which crictl
	I0403 19:13:21.197748   54806 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0403 19:13:21.197785   54806 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:13:21.197825   54806 ssh_runner.go:195] Run: which crictl
	I0403 19:13:21.212525   54806 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0403 19:13:21.212569   54806 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:13:21.212612   54806 ssh_runner.go:195] Run: which crictl
	I0403 19:13:21.223331   54806 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0403 19:13:21.223375   54806 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0403 19:13:21.223344   54806 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0403 19:13:21.223412   54806 ssh_runner.go:195] Run: which crictl
	I0403 19:13:21.223435   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0403 19:13:21.223437   54806 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0403 19:13:21.223523   54806 ssh_runner.go:195] Run: which crictl
	I0403 19:13:21.252851   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:13:21.252854   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:13:21.252954   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:13:21.252961   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:13:21.284208   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0403 19:13:21.284249   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0403 19:13:21.284208   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0403 19:13:21.384779   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:13:21.384892   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:13:21.384911   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:13:21.384949   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:13:21.411640   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0403 19:13:21.437274   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0403 19:13:21.437304   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0403 19:13:21.545466   54806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0403 19:13:21.547148   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:13:21.547227   54806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0403 19:13:21.547262   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:13:21.547246   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:13:21.562580   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0403 19:13:21.572240   54806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0403 19:13:21.650540   54806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0403 19:13:21.650614   54806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0403 19:13:21.650637   54806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0403 19:13:21.654579   54806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0403 19:13:21.666515   54806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0403 19:13:22.121220   54806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:13:22.260907   54806 cache_images.go:92] duration metric: took 1.442695017s to LoadCachedImages
	W0403 19:13:22.260998   54806 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
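The cache_images lines above decide, per image, whether a transfer is needed by asking podman for the image ID currently in the runtime and comparing it with the ID minikube expects; a mismatch or a missing image triggers the `crictl rmi` plus load-from-cache path (which then fails here because the cached kube-proxy tarball is absent). A sketch of just that check, using the pause:3.2 name and ID from this log (any image/ID pair would do):

// needs_transfer.go - mirror of the "needs transfer" decision logged by cache_images.go:116.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageNeedsTransfer reports whether img is absent from the runtime or present
// under a different ID than expected.
func imageNeedsTransfer(img, expectedID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", img).Output()
	if err != nil {
		return true // inspect failed, so the image is not in the runtime at all
	}
	return strings.TrimSpace(string(out)) != expectedID
}

func main() {
	img := "registry.k8s.io/pause:3.2"
	expected := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" // ID from the log
	fmt.Printf("%s needs transfer: %v\n", img, imageNeedsTransfer(img, expected))
}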
	I0403 19:13:22.261025   54806 kubeadm.go:934] updating node { 192.168.39.159 8443 v1.20.0 crio true true} ...
	I0403 19:13:22.261131   54806 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-523797 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-523797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0403 19:13:22.261217   54806 ssh_runner.go:195] Run: crio config
	I0403 19:13:22.312243   54806 cni.go:84] Creating CNI manager for ""
	I0403 19:13:22.312270   54806 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:13:22.312282   54806 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0403 19:13:22.312300   54806 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-523797 NodeName:kubernetes-upgrade-523797 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0403 19:13:22.312426   54806 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-523797"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0403 19:13:22.312483   54806 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0403 19:13:22.325145   54806 binaries.go:44] Found k8s binaries, skipping transfer
	I0403 19:13:22.325214   54806 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0403 19:13:22.337216   54806 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0403 19:13:22.356282   54806 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0403 19:13:22.373381   54806 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0403 19:13:22.393866   54806 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I0403 19:13:22.398267   54806 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 19:13:22.411085   54806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:13:22.528655   54806 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:13:22.544915   54806 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797 for IP: 192.168.39.159
	I0403 19:13:22.544942   54806 certs.go:194] generating shared ca certs ...
	I0403 19:13:22.544962   54806 certs.go:226] acquiring lock for ca certs: {Name:mk34f09724355b828d31eb1ee97771fd3b4645c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:13:22.545144   54806 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key
	I0403 19:13:22.545203   54806 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key
	I0403 19:13:22.545218   54806 certs.go:256] generating profile certs ...
	I0403 19:13:22.545292   54806 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/client.key
	I0403 19:13:22.545309   54806 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/client.crt with IP's: []
	I0403 19:13:22.735622   54806 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/client.crt ...
	I0403 19:13:22.735650   54806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/client.crt: {Name:mk91df64b23d8794695b960e8a6a9b6b4b19eb24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:13:22.735811   54806 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/client.key ...
	I0403 19:13:22.735824   54806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/client.key: {Name:mk6f8e4f685f0e7790aa14044ca7d2958ef9efd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:13:22.735897   54806 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.key.b5d7e4f2
	I0403 19:13:22.735913   54806 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.crt.b5d7e4f2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159]
	I0403 19:13:23.412589   54806 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.crt.b5d7e4f2 ...
	I0403 19:13:23.412613   54806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.crt.b5d7e4f2: {Name:mkdb636017f54eb18308f117f328a626deef4117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:13:23.412798   54806 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.key.b5d7e4f2 ...
	I0403 19:13:23.412816   54806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.key.b5d7e4f2: {Name:mkc9f96dab16aa4ebcc8656bef73ae77220ecd60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:13:23.412918   54806 certs.go:381] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.crt.b5d7e4f2 -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.crt
	I0403 19:13:23.413031   54806 certs.go:385] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.key.b5d7e4f2 -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.key
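The apiserver profile cert above is generated with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159] and the profile's 26280h expiry. Below is a minimal crypto/x509 sketch that produces a certificate with those SANs; it is self-signed for brevity, whereas minikube signs with its minikubeCA key, and the 2048-bit key size is an assumption (the log does not state it):

// profile_cert.go - create a server certificate with the IP SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration value from the profile config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs listed for apiserver.crt.b5d7e4f2 in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.159"),
		},
	}
	// Self-signed (template is its own parent); minikube would pass its CA cert and key here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("apiserver.crt")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}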
	I0403 19:13:23.413129   54806 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/proxy-client.key
	I0403 19:13:23.413150   54806 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/proxy-client.crt with IP's: []
	I0403 19:13:23.699620   54806 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/proxy-client.crt ...
	I0403 19:13:23.699648   54806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/proxy-client.crt: {Name:mkcea923219af6cbef26d76bb9096ada6f164da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:13:23.699845   54806 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/proxy-client.key ...
	I0403 19:13:23.699864   54806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/proxy-client.key: {Name:mk17ab6e9560fe6a9e442057db4bf45d98d55b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:13:23.700091   54806 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem (1338 bytes)
	W0403 19:13:23.700134   54806 certs.go:480] ignoring /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552_empty.pem, impossibly tiny 0 bytes
	I0403 19:13:23.700149   54806 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem (1675 bytes)
	I0403 19:13:23.700193   54806 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem (1082 bytes)
	I0403 19:13:23.700223   54806 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem (1123 bytes)
	I0403 19:13:23.700267   54806 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem (1675 bytes)
	I0403 19:13:23.700328   54806 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:13:23.701000   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0403 19:13:23.726457   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0403 19:13:23.749619   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0403 19:13:23.778716   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0403 19:13:23.801888   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0403 19:13:23.824517   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0403 19:13:23.846667   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0403 19:13:23.869186   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0403 19:13:23.891619   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem --> /usr/share/ca-certificates/21552.pem (1338 bytes)
	I0403 19:13:23.914121   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /usr/share/ca-certificates/215522.pem (1708 bytes)
	I0403 19:13:23.936296   54806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0403 19:13:23.963063   54806 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0403 19:13:23.979062   54806 ssh_runner.go:195] Run: openssl version
	I0403 19:13:23.984486   54806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0403 19:13:23.994660   54806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:13:23.998652   54806 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  3 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:13:23.998694   54806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:13:24.004290   54806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0403 19:13:24.014145   54806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21552.pem && ln -fs /usr/share/ca-certificates/21552.pem /etc/ssl/certs/21552.pem"
	I0403 19:13:24.024983   54806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21552.pem
	I0403 19:13:24.028971   54806 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  3 18:20 /usr/share/ca-certificates/21552.pem
	I0403 19:13:24.029036   54806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21552.pem
	I0403 19:13:24.034277   54806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21552.pem /etc/ssl/certs/51391683.0"
	I0403 19:13:24.043951   54806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215522.pem && ln -fs /usr/share/ca-certificates/215522.pem /etc/ssl/certs/215522.pem"
	I0403 19:13:24.053868   54806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215522.pem
	I0403 19:13:24.057863   54806 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  3 18:20 /usr/share/ca-certificates/215522.pem
	I0403 19:13:24.057906   54806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215522.pem
	I0403 19:13:24.062878   54806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215522.pem /etc/ssl/certs/3ec20f2e.0"
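Each CA above is installed by hashing it with `openssl x509 -hash -noout` and symlinking /etc/ssl/certs/<hash>.0 at it, which is the lookup scheme OpenSSL-style trust stores use. A sketch of that step for minikubeCA.pem, using the same paths as the log (the b5213941 hash comes out of openssl at run time; needs root):

// cert_symlink.go - create the <subject-hash>.0 symlink for the minikube CA.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const caPEM = "/usr/share/ca-certificates/minikubeCA.pem" // copied to the guest earlier in this log
	const linkTarget = "/etc/ssl/certs/minikubeCA.pem"        // symlink created by the preceding `test -s && ln -fs` step
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPEM).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" for this CA in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	if err := os.Symlink(linkTarget, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", linkTarget)
}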
	I0403 19:13:24.072289   54806 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0403 19:13:24.076087   54806 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0403 19:13:24.076134   54806 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-523797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-523797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:13:24.076219   54806 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0403 19:13:24.076264   54806 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 19:13:24.111058   54806 cri.go:89] found id: ""
	I0403 19:13:24.111152   54806 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0403 19:13:24.121112   54806 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0403 19:13:24.130485   54806 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:13:24.139707   54806 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:13:24.139735   54806 kubeadm.go:157] found existing configuration files:
	
	I0403 19:13:24.139786   54806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:13:24.148403   54806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:13:24.148469   54806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:13:24.157267   54806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:13:24.166323   54806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:13:24.166390   54806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:13:24.175190   54806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:13:24.183465   54806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:13:24.183509   54806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:13:24.192191   54806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:13:24.200707   54806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:13:24.200759   54806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:13:24.209744   54806 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0403 19:13:24.318249   54806 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0403 19:13:24.318384   54806 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 19:13:24.490744   54806 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 19:13:24.490873   54806 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 19:13:24.490981   54806 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0403 19:13:24.654540   54806 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 19:13:24.711546   54806 out.go:235]   - Generating certificates and keys ...
	I0403 19:13:24.711709   54806 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 19:13:24.711816   54806 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 19:13:24.797407   54806 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0403 19:13:25.021112   54806 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0403 19:13:25.239971   54806 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0403 19:13:25.531136   54806 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0403 19:13:25.643442   54806 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0403 19:13:25.643731   54806 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-523797 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	I0403 19:13:25.747004   54806 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0403 19:13:25.747308   54806 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-523797 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	I0403 19:13:26.032383   54806 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0403 19:13:26.242207   54806 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0403 19:13:26.338176   54806 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0403 19:13:26.338303   54806 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 19:13:26.647552   54806 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 19:13:26.824406   54806 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 19:13:26.947899   54806 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 19:13:27.062185   54806 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 19:13:27.087289   54806 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 19:13:27.088517   54806 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 19:13:27.088625   54806 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 19:13:27.239582   54806 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 19:13:27.241328   54806 out.go:235]   - Booting up control plane ...
	I0403 19:13:27.241467   54806 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 19:13:27.261109   54806 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 19:13:27.262146   54806 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 19:13:27.262940   54806 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 19:13:27.269268   54806 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0403 19:14:07.263638   54806 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0403 19:14:07.263981   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:14:07.264260   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:14:12.264955   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:14:12.265282   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:14:22.264514   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:14:22.264767   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:14:42.264388   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:14:42.264722   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:15:22.266067   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:15:22.266388   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:15:22.266417   54806 kubeadm.go:310] 
	I0403 19:15:22.266450   54806 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0403 19:15:22.266486   54806 kubeadm.go:310] 		timed out waiting for the condition
	I0403 19:15:22.266496   54806 kubeadm.go:310] 
	I0403 19:15:22.266524   54806 kubeadm.go:310] 	This error is likely caused by:
	I0403 19:15:22.266553   54806 kubeadm.go:310] 		- The kubelet is not running
	I0403 19:15:22.266651   54806 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0403 19:15:22.266658   54806 kubeadm.go:310] 
	I0403 19:15:22.266743   54806 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0403 19:15:22.266780   54806 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0403 19:15:22.266809   54806 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0403 19:15:22.266833   54806 kubeadm.go:310] 
	I0403 19:15:22.266976   54806 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0403 19:15:22.267045   54806 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0403 19:15:22.267054   54806 kubeadm.go:310] 
	I0403 19:15:22.267220   54806 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0403 19:15:22.267361   54806 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0403 19:15:22.267462   54806 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0403 19:15:22.267566   54806 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0403 19:15:22.267577   54806 kubeadm.go:310] 
	I0403 19:15:22.268358   54806 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:15:22.268457   54806 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0403 19:15:22.268547   54806 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0403 19:15:22.268694   54806 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-523797 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-523797 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-523797 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-523797 localhost] and IPs [192.168.39.159 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0403 19:15:22.268730   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0403 19:15:22.724890   54806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:15:22.739461   54806 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:15:22.748070   54806 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:15:22.748092   54806 kubeadm.go:157] found existing configuration files:
	
	I0403 19:15:22.748133   54806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:15:22.756117   54806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:15:22.756176   54806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:15:22.764397   54806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:15:22.772350   54806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:15:22.772402   54806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:15:22.781291   54806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:15:22.789712   54806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:15:22.789757   54806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:15:22.798406   54806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:15:22.807265   54806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:15:22.807305   54806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:15:22.816117   54806 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0403 19:15:23.007800   54806 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:17:19.561320   54806 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0403 19:17:19.561478   54806 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0403 19:17:19.562998   54806 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0403 19:17:19.563074   54806 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 19:17:19.563169   54806 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 19:17:19.563280   54806 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 19:17:19.563427   54806 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0403 19:17:19.563533   54806 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 19:17:19.565159   54806 out.go:235]   - Generating certificates and keys ...
	I0403 19:17:19.565266   54806 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 19:17:19.565376   54806 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 19:17:19.565503   54806 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0403 19:17:19.565584   54806 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0403 19:17:19.565683   54806 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0403 19:17:19.565753   54806 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0403 19:17:19.565839   54806 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0403 19:17:19.565918   54806 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0403 19:17:19.566039   54806 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0403 19:17:19.566171   54806 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0403 19:17:19.566242   54806 kubeadm.go:310] [certs] Using the existing "sa" key
	I0403 19:17:19.566325   54806 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 19:17:19.566400   54806 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 19:17:19.566475   54806 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 19:17:19.566537   54806 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 19:17:19.566581   54806 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 19:17:19.566724   54806 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 19:17:19.566846   54806 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 19:17:19.566917   54806 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 19:17:19.567030   54806 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 19:17:19.569083   54806 out.go:235]   - Booting up control plane ...
	I0403 19:17:19.569224   54806 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 19:17:19.569358   54806 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 19:17:19.569447   54806 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 19:17:19.569547   54806 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 19:17:19.569796   54806 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0403 19:17:19.569855   54806 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0403 19:17:19.569965   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:17:19.570271   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:17:19.570361   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:17:19.570602   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:17:19.570686   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:17:19.570985   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:17:19.571082   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:17:19.571339   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:17:19.571434   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:17:19.571690   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:17:19.571702   54806 kubeadm.go:310] 
	I0403 19:17:19.571751   54806 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0403 19:17:19.571809   54806 kubeadm.go:310] 		timed out waiting for the condition
	I0403 19:17:19.571819   54806 kubeadm.go:310] 
	I0403 19:17:19.571864   54806 kubeadm.go:310] 	This error is likely caused by:
	I0403 19:17:19.571912   54806 kubeadm.go:310] 		- The kubelet is not running
	I0403 19:17:19.572058   54806 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0403 19:17:19.572068   54806 kubeadm.go:310] 
	I0403 19:17:19.572183   54806 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0403 19:17:19.572229   54806 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0403 19:17:19.572272   54806 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0403 19:17:19.572281   54806 kubeadm.go:310] 
	I0403 19:17:19.572413   54806 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0403 19:17:19.572522   54806 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0403 19:17:19.572532   54806 kubeadm.go:310] 
	I0403 19:17:19.572668   54806 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0403 19:17:19.572744   54806 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0403 19:17:19.572808   54806 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0403 19:17:19.572870   54806 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0403 19:17:19.572927   54806 kubeadm.go:394] duration metric: took 3m55.49680016s to StartCluster
	I0403 19:17:19.572982   54806 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:17:19.573031   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:17:19.573084   54806 kubeadm.go:310] 
	I0403 19:17:19.628411   54806 cri.go:89] found id: ""
	I0403 19:17:19.628448   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.628460   54806 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:17:19.628469   54806 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:17:19.628556   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:17:19.670445   54806 cri.go:89] found id: ""
	I0403 19:17:19.670467   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.670475   54806 logs.go:284] No container was found matching "etcd"
	I0403 19:17:19.670481   54806 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:17:19.670536   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:17:19.718817   54806 cri.go:89] found id: ""
	I0403 19:17:19.718864   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.718876   54806 logs.go:284] No container was found matching "coredns"
	I0403 19:17:19.718885   54806 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:17:19.718946   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:17:19.769896   54806 cri.go:89] found id: ""
	I0403 19:17:19.769924   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.769945   54806 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:17:19.769953   54806 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:17:19.770011   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:17:19.818772   54806 cri.go:89] found id: ""
	I0403 19:17:19.818801   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.818812   54806 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:17:19.818839   54806 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:17:19.818904   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:17:19.869079   54806 cri.go:89] found id: ""
	I0403 19:17:19.869106   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.869117   54806 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:17:19.869128   54806 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:17:19.869205   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:17:19.911846   54806 cri.go:89] found id: ""
	I0403 19:17:19.911875   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.911887   54806 logs.go:284] No container was found matching "kindnet"
	I0403 19:17:19.911897   54806 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:17:19.911910   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:17:20.075082   54806 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:17:20.075112   54806 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:17:20.075127   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:17:20.234441   54806 logs.go:123] Gathering logs for container status ...
	I0403 19:17:20.234484   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:17:20.287481   54806 logs.go:123] Gathering logs for kubelet ...
	I0403 19:17:20.287510   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:17:20.351628   54806 logs.go:123] Gathering logs for dmesg ...
	I0403 19:17:20.351705   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0403 19:17:20.369787   54806 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0403 19:17:20.369863   54806 out.go:270] * 
	* 
	W0403 19:17:20.369930   54806 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:17:20.369948   54806 out.go:270] * 
	* 
	W0403 19:17:20.371150   54806 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0403 19:17:20.374570   54806 out.go:201] 
	W0403 19:17:20.375914   54806 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:17:20.376194   54806 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0403 19:17:20.376230   54806 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0403 19:17:20.380361   54806 out.go:201] 

                                                
                                                
** /stderr **
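For reference, a minimal sketch of the on-node checks suggested by the kubeadm output above, run through minikube ssh. The profile name kubernetes-upgrade-523797, the cri-o socket path, and the cgroup-driver flag are all taken from this log; the flag mirrors minikube's printed suggestion rather than a verified fix.

	# Inspect the kubelet on the minikube node (mirrors the kubeadm hints above).
	minikube -p kubernetes-upgrade-523797 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p kubernetes-upgrade-523797 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# List Kubernetes containers managed by cri-o to spot a crashed control-plane component.
	minikube -p kubernetes-upgrade-523797 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# If the kubelet cgroup driver is the culprit, retry the start with the suggested override.
	out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd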
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-523797
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-523797: (2.644853606s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-523797 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-523797 status --format={{.Host}}: exit status 7 (73.623121ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
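As an aside, the non-zero status above is expected for a stopped profile; a quick way to confirm the host state before retrying the start (profile name taken from this run) is sketched below.

	# Query only the host field; a stopped VM reports "Stopped" with a non-zero exit code,
	# which the test treats as acceptable here ("may be ok").
	out/minikube-linux-amd64 -p kubernetes-upgrade-523797 status --format='{{.Host}}' \
	  || echo "profile not running (exit $?)"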
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.425886486s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-523797 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (78.151593ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-523797] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-523797
	    minikube start -p kubernetes-upgrade-523797 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5237972 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-523797 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
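If a downgrade were actually needed outside this test, the first suggestion above is the safe path; a minimal sketch of that recovery flow plus a version check follows (the commands and profile name come from this log, and the kubectl check mirrors the test's own verification step).

	# Recreate the profile at the older Kubernetes version instead of downgrading in place.
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-523797
	out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
	# Confirm the control-plane version afterwards (same check the test performs).
	kubectl --context kubernetes-upgrade-523797 version --output=json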
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.084339345s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-03 19:19:35.845471297 +0000 UTC m=+4076.356346386
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-523797 -n kubernetes-upgrade-523797
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-523797 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-523797 logs -n 25: (1.600547509s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-999005 sudo                  | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-999005 sudo                  | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-999005 sudo                  | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-999005 sudo cat              | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-999005 sudo cat              | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-999005 sudo                  | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-999005 sudo                  | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-999005 sudo                  | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-999005 sudo find             | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-999005 sudo crio             | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-999005                       | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC | 03 Apr 25 19:17 UTC |
	| delete  | -p pause-942912                        | pause-942912              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC | 03 Apr 25 19:17 UTC |
	| start   | -p cert-expiration-954352              | cert-expiration-954352    | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC | 03 Apr 25 19:18 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-426227           | force-systemd-flag-426227 | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC | 03 Apr 25 19:18 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-413283              | stopped-upgrade-413283    | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC | 03 Apr 25 19:17 UTC |
	| start   | -p cert-options-528707                 | cert-options-528707       | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC | 03 Apr 25 19:19 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-523797           | kubernetes-upgrade-523797 | jenkins | v1.35.0 | 03 Apr 25 19:18 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-523797           | kubernetes-upgrade-523797 | jenkins | v1.35.0 | 03 Apr 25 19:18 UTC | 03 Apr 25 19:19 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-426227 ssh cat      | force-systemd-flag-426227 | jenkins | v1.35.0 | 03 Apr 25 19:18 UTC | 03 Apr 25 19:18 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-426227           | force-systemd-flag-426227 | jenkins | v1.35.0 | 03 Apr 25 19:18 UTC | 03 Apr 25 19:18 UTC |
	| start   | -p old-k8s-version-471019              | old-k8s-version-471019    | jenkins | v1.35.0 | 03 Apr 25 19:18 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| ssh     | cert-options-528707 ssh                | cert-options-528707       | jenkins | v1.35.0 | 03 Apr 25 19:19 UTC | 03 Apr 25 19:19 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-528707 -- sudo         | cert-options-528707       | jenkins | v1.35.0 | 03 Apr 25 19:19 UTC | 03 Apr 25 19:19 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-528707                 | cert-options-528707       | jenkins | v1.35.0 | 03 Apr 25 19:19 UTC | 03 Apr 25 19:19 UTC |
	| start   | -p embed-certs-840360                  | embed-certs-840360        | jenkins | v1.35.0 | 03 Apr 25 19:19 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 19:19:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 19:19:21.549148   63153 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:19:21.549433   63153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:19:21.549445   63153 out.go:358] Setting ErrFile to fd 2...
	I0403 19:19:21.549451   63153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:19:21.549777   63153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:19:21.550543   63153 out.go:352] Setting JSON to false
	I0403 19:19:21.551772   63153 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7307,"bootTime":1743700655,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:19:21.551902   63153 start.go:139] virtualization: kvm guest
	I0403 19:19:21.553401   63153 out.go:177] * [embed-certs-840360] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:19:21.554399   63153 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:19:21.554438   63153 notify.go:220] Checking for updates...
	I0403 19:19:21.556148   63153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:19:21.557119   63153 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:19:21.558011   63153 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:19:21.558946   63153 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:19:21.559837   63153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:19:21.561239   63153 config.go:182] Loaded profile config "cert-expiration-954352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:19:21.561375   63153 config.go:182] Loaded profile config "kubernetes-upgrade-523797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:19:21.561533   63153 config.go:182] Loaded profile config "old-k8s-version-471019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:19:21.561642   63153 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:19:21.596824   63153 out.go:177] * Using the kvm2 driver based on user configuration
	I0403 19:19:21.597641   63153 start.go:297] selected driver: kvm2
	I0403 19:19:21.597655   63153 start.go:901] validating driver "kvm2" against <nil>
	I0403 19:19:21.597669   63153 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:19:21.598675   63153 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:19:21.598761   63153 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 19:19:21.613548   63153 install.go:137] /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0403 19:19:21.613600   63153 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 19:19:21.613813   63153 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:19:21.613846   63153 cni.go:84] Creating CNI manager for ""
	I0403 19:19:21.613882   63153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:19:21.613896   63153 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 19:19:21.613958   63153 start.go:340] cluster config:
	{Name:embed-certs-840360 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-840360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:19:21.614053   63153 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:19:21.615665   63153 out.go:177] * Starting "embed-certs-840360" primary control-plane node in "embed-certs-840360" cluster
	I0403 19:19:19.521892   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:19.522362   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:19.522417   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:19.522357   62860 retry.go:31] will retry after 1.840778021s: waiting for domain to come up
	I0403 19:19:21.365048   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:21.365536   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:21.365564   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:21.365509   62860 retry.go:31] will retry after 1.397058601s: waiting for domain to come up
	I0403 19:19:22.764782   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:22.765369   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:22.765395   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:22.765333   62860 retry.go:31] will retry after 2.43355181s: waiting for domain to come up
	I0403 19:19:21.616588   63153 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 19:19:21.616636   63153 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0403 19:19:21.616648   63153 cache.go:56] Caching tarball of preloaded images
	I0403 19:19:21.616732   63153 preload.go:172] Found /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0403 19:19:21.616746   63153 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0403 19:19:21.616881   63153 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/embed-certs-840360/config.json ...
	I0403 19:19:21.616905   63153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/embed-certs-840360/config.json: {Name:mk4b8246ea334229aef6035f31485d42d7f4240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:19:21.617062   63153 start.go:360] acquireMachinesLock for embed-certs-840360: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 19:19:25.200901   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:25.201310   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:25.201375   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:25.201278   62860 retry.go:31] will retry after 3.423338921s: waiting for domain to come up
	I0403 19:19:28.626779   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:28.627352   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:28.627379   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:28.627316   62860 retry.go:31] will retry after 3.071967317s: waiting for domain to come up
	I0403 19:19:27.980488   62017 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 50b1833edd2b4de6fa553f89cda5e12713a450a4d4edfa1b56c1d5019c9d7c54 c003de8cd2f18c2b0cd1ce89a742350469eaa9bb8cf0299840b1a80f4fd173f2 45b06d77742191041339bf6f070547253a14e2dde3aad31aea01d0f3fb584297 4b6eefd71abbc334b4ed4dfe1a41d3c1e8f4227f6a62139521ae33768b926e3b e3effc48c04a4ce18abe3e6cf1a20ee68785bc6ebd7a0615340fd8a4d1c10e54 1fe8003f07d8aa5b67f399a929851aef229e3f3c3131d4533f5928fdc0f33402 432bbbd74d309fd94fa9f1c8b229ef9775e9c10c011d357e9529098e5581e79a b253d372a18a1143f555e78805bb93b809d41f29794f0e14175ba82abb033c19 7faf30a5d5f851539f4d1d1d67128a544e33899a7399aec038e26b2b93d89899: (9.959977212s)
	I0403 19:19:27.980558   62017 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0403 19:19:28.032354   62017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:19:28.045484   62017 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Apr  3 19:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Apr  3 19:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Apr  3 19:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Apr  3 19:17 /etc/kubernetes/scheduler.conf
	
	I0403 19:19:28.045542   62017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:19:28.056425   62017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:19:28.065224   62017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:19:28.073581   62017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0403 19:19:28.073640   62017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:19:28.082332   62017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:19:28.090881   62017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0403 19:19:28.090929   62017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:19:28.099602   62017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0403 19:19:28.109060   62017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:19:28.166255   62017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:19:29.007329   62017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:19:29.221973   62017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:19:29.288082   62017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:19:29.408610   62017 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:19:29.408687   62017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:19:29.909401   62017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:19:30.409572   62017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:19:30.430312   62017 api_server.go:72] duration metric: took 1.021701192s to wait for apiserver process to appear ...
	I0403 19:19:30.430339   62017 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:19:30.430367   62017 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0403 19:19:31.702571   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:31.703040   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:31.703069   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:31.703011   62860 retry.go:31] will retry after 4.304834953s: waiting for domain to come up
	I0403 19:19:32.763021   62017 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0403 19:19:32.763056   62017 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0403 19:19:32.763079   62017 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0403 19:19:32.794010   62017 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0403 19:19:32.794037   62017 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0403 19:19:32.931381   62017 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0403 19:19:32.935290   62017 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0403 19:19:32.935312   62017 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0403 19:19:33.430919   62017 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0403 19:19:33.435485   62017 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0403 19:19:33.435514   62017 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0403 19:19:33.931266   62017 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0403 19:19:33.944636   62017 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0403 19:19:33.944666   62017 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0403 19:19:34.431373   62017 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0403 19:19:34.435700   62017 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I0403 19:19:34.442064   62017 api_server.go:141] control plane version: v1.32.2
	I0403 19:19:34.442087   62017 api_server.go:131] duration metric: took 4.011741853s to wait for apiserver health ...
	I0403 19:19:34.442096   62017 cni.go:84] Creating CNI manager for ""
	I0403 19:19:34.442102   62017 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:19:34.443831   62017 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0403 19:19:34.444752   62017 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0403 19:19:34.454605   62017 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0403 19:19:34.470962   62017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:19:34.474029   62017 system_pods.go:59] 8 kube-system pods found
	I0403 19:19:34.474063   62017 system_pods.go:61] "coredns-668d6bf9bc-l72qz" [50ab8c40-2dcd-4ad9-ae7c-88a924660006] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0403 19:19:34.474082   62017 system_pods.go:61] "coredns-668d6bf9bc-mx6nt" [0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0403 19:19:34.474094   62017 system_pods.go:61] "etcd-kubernetes-upgrade-523797" [d7fd351c-6f0e-439b-bf2f-3558c52a91d2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0403 19:19:34.474100   62017 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-523797" [f9ce57a1-3ef9-4006-b850-45f02ab82610] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0403 19:19:34.474108   62017 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-523797" [74e73959-4b8a-4d4b-ad53-f2567bf94cce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0403 19:19:34.474116   62017 system_pods.go:61] "kube-proxy-2gxwk" [2a43667b-2092-429c-b9ba-ad5a3186962a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0403 19:19:34.474122   62017 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-523797" [3c68f3c8-ad67-4126-adb2-f227121f76ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0403 19:19:34.474129   62017 system_pods.go:61] "storage-provisioner" [1e803e42-8aa0-47f0-8cb2-4eb30f5f632b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0403 19:19:34.474134   62017 system_pods.go:74] duration metric: took 3.149951ms to wait for pod list to return data ...
	I0403 19:19:34.474143   62017 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:19:34.476109   62017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:19:34.476132   62017 node_conditions.go:123] node cpu capacity is 2
	I0403 19:19:34.476145   62017 node_conditions.go:105] duration metric: took 1.997109ms to run NodePressure ...
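The pod list and the capacity figures logged above can be reproduced manually; a hedged sketch, assuming the kubectl context matches the profile name as configured at the end of this run:

    kubectl --context kubernetes-upgrade-523797 -n kube-system get pods
    # cpu and ephemeral-storage in the capacity map correspond to the NodePressure values logged above
    kubectl --context kubernetes-upgrade-523797 get node kubernetes-upgrade-523797 -o jsonpath='{.status.capacity}'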
	I0403 19:19:34.476162   62017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:19:34.732054   62017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0403 19:19:34.742782   62017 ops.go:34] apiserver oom_adj: -16
	I0403 19:19:34.742802   62017 kubeadm.go:597] duration metric: took 16.80282902s to restartPrimaryControlPlane
	I0403 19:19:34.742810   62017 kubeadm.go:394] duration metric: took 17.045558009s to StartCluster
	I0403 19:19:34.742839   62017 settings.go:142] acquiring lock: {Name:mk92384ef10350a2ea8b1710e0b74c72a0214398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:19:34.742930   62017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:19:34.743703   62017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:19:34.743955   62017 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:19:34.744071   62017 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0403 19:19:34.744159   62017 config.go:182] Loaded profile config "kubernetes-upgrade-523797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:19:34.744168   62017 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-523797"
	I0403 19:19:34.744188   62017 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-523797"
	W0403 19:19:34.744200   62017 addons.go:247] addon storage-provisioner should already be in state true
	I0403 19:19:34.744212   62017 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-523797"
	I0403 19:19:34.744233   62017 host.go:66] Checking if "kubernetes-upgrade-523797" exists ...
	I0403 19:19:34.744239   62017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-523797"
	I0403 19:19:34.744537   62017 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:19:34.744574   62017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:19:34.744604   62017 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:19:34.744644   62017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:19:34.745625   62017 out.go:177] * Verifying Kubernetes components...
	I0403 19:19:34.746759   62017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:19:34.759240   62017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43769
	I0403 19:19:34.759521   62017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43829
	I0403 19:19:34.759646   62017 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:19:34.759980   62017 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:19:34.760183   62017 main.go:141] libmachine: Using API Version  1
	I0403 19:19:34.760203   62017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:19:34.760456   62017 main.go:141] libmachine: Using API Version  1
	I0403 19:19:34.760478   62017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:19:34.760504   62017 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:19:34.760823   62017 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:19:34.761014   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetState
	I0403 19:19:34.761098   62017 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:19:34.761142   62017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:19:34.763286   62017 kapi.go:59] client config for kubernetes-upgrade-523797: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/client.crt", KeyFile:"/home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/client.key", CAFile:"/home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil
), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0403 19:19:34.763540   62017 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-523797"
	W0403 19:19:34.763555   62017 addons.go:247] addon default-storageclass should already be in state true
	I0403 19:19:34.763581   62017 host.go:66] Checking if "kubernetes-upgrade-523797" exists ...
	I0403 19:19:34.763836   62017 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:19:34.763875   62017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:19:34.778216   62017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0403 19:19:34.778258   62017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37333
	I0403 19:19:34.778743   62017 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:19:34.778751   62017 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:19:34.779226   62017 main.go:141] libmachine: Using API Version  1
	I0403 19:19:34.779255   62017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:19:34.779331   62017 main.go:141] libmachine: Using API Version  1
	I0403 19:19:34.779348   62017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:19:34.779609   62017 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:19:34.779646   62017 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:19:34.779819   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetState
	I0403 19:19:34.780136   62017 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:19:34.780169   62017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:19:34.781558   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:19:34.783271   62017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:19:34.784323   62017 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:19:34.784338   62017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0403 19:19:34.784351   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:19:34.787579   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:19:34.788060   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:19:34.788084   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:19:34.788282   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:19:34.788474   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:19:34.788636   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:19:34.788787   62017 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/id_rsa Username:docker}
	I0403 19:19:34.795784   62017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33237
	I0403 19:19:34.796133   62017 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:19:34.796561   62017 main.go:141] libmachine: Using API Version  1
	I0403 19:19:34.796579   62017 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:19:34.796859   62017 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:19:34.797043   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetState
	I0403 19:19:34.798399   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:19:34.798545   62017 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0403 19:19:34.798558   62017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0403 19:19:34.798569   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHHostname
	I0403 19:19:34.801478   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:19:34.801893   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:b5:19", ip: ""} in network mk-kubernetes-upgrade-523797: {Iface:virbr1 ExpiryTime:2025-04-03 20:13:06 +0000 UTC Type:0 Mac:52:54:00:47:b5:19 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:kubernetes-upgrade-523797 Clientid:01:52:54:00:47:b5:19}
	I0403 19:19:34.801972   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | domain kubernetes-upgrade-523797 has defined IP address 192.168.39.159 and MAC address 52:54:00:47:b5:19 in network mk-kubernetes-upgrade-523797
	I0403 19:19:34.802144   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHPort
	I0403 19:19:34.802290   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHKeyPath
	I0403 19:19:34.802430   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetSSHUsername
	I0403 19:19:34.802553   62017 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/kubernetes-upgrade-523797/id_rsa Username:docker}
	I0403 19:19:34.917659   62017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:19:34.935231   62017 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:19:34.935304   62017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:19:34.947879   62017 api_server.go:72] duration metric: took 203.888506ms to wait for apiserver process to appear ...
	I0403 19:19:34.947913   62017 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:19:34.947929   62017 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0403 19:19:34.954047   62017 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I0403 19:19:34.954969   62017 api_server.go:141] control plane version: v1.32.2
	I0403 19:19:34.954987   62017 api_server.go:131] duration metric: took 7.069158ms to wait for apiserver health ...
	I0403 19:19:34.954994   62017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:19:34.958093   62017 system_pods.go:59] 8 kube-system pods found
	I0403 19:19:34.958120   62017 system_pods.go:61] "coredns-668d6bf9bc-l72qz" [50ab8c40-2dcd-4ad9-ae7c-88a924660006] Running
	I0403 19:19:34.958132   62017 system_pods.go:61] "coredns-668d6bf9bc-mx6nt" [0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0403 19:19:34.958143   62017 system_pods.go:61] "etcd-kubernetes-upgrade-523797" [d7fd351c-6f0e-439b-bf2f-3558c52a91d2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0403 19:19:34.958154   62017 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-523797" [f9ce57a1-3ef9-4006-b850-45f02ab82610] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0403 19:19:34.958167   62017 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-523797" [74e73959-4b8a-4d4b-ad53-f2567bf94cce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0403 19:19:34.958176   62017 system_pods.go:61] "kube-proxy-2gxwk" [2a43667b-2092-429c-b9ba-ad5a3186962a] Running
	I0403 19:19:34.958185   62017 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-523797" [3c68f3c8-ad67-4126-adb2-f227121f76ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0403 19:19:34.958192   62017 system_pods.go:61] "storage-provisioner" [1e803e42-8aa0-47f0-8cb2-4eb30f5f632b] Running
	I0403 19:19:34.958198   62017 system_pods.go:74] duration metric: took 3.199446ms to wait for pod list to return data ...
	I0403 19:19:34.958211   62017 kubeadm.go:582] duration metric: took 214.224642ms to wait for: map[apiserver:true system_pods:true]
	I0403 19:19:34.958229   62017 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:19:34.960492   62017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:19:34.960506   62017 node_conditions.go:123] node cpu capacity is 2
	I0403 19:19:34.960515   62017 node_conditions.go:105] duration metric: took 2.279238ms to run NodePressure ...
	I0403 19:19:34.960524   62017 start.go:241] waiting for startup goroutines ...
	I0403 19:19:35.069491   62017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0403 19:19:35.084937   62017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
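A hedged sketch of verifying that the two addon manifests applied above took effect (object names assumed from the manifests referenced in this log):

    kubectl --context kubernetes-upgrade-523797 get storageclass
    kubectl --context kubernetes-upgrade-523797 -n kube-system get pod storage-provisioner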
	I0403 19:19:35.320159   62017 main.go:141] libmachine: Making call to close driver server
	I0403 19:19:35.320192   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .Close
	I0403 19:19:35.320487   62017 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:19:35.320504   62017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:19:35.320513   62017 main.go:141] libmachine: Making call to close driver server
	I0403 19:19:35.320530   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | Closing plugin on server side
	I0403 19:19:35.320572   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .Close
	I0403 19:19:35.320813   62017 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:19:35.320828   62017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:19:35.326622   62017 main.go:141] libmachine: Making call to close driver server
	I0403 19:19:35.326638   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .Close
	I0403 19:19:35.326881   62017 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:19:35.326901   62017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:19:35.779640   62017 main.go:141] libmachine: Making call to close driver server
	I0403 19:19:35.779661   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .Close
	I0403 19:19:35.779936   62017 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:19:35.779958   62017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:19:35.779967   62017 main.go:141] libmachine: Making call to close driver server
	I0403 19:19:35.779987   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | Closing plugin on server side
	I0403 19:19:35.780054   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .Close
	I0403 19:19:35.780286   62017 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:19:35.780340   62017 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:19:35.780315   62017 main.go:141] libmachine: (kubernetes-upgrade-523797) DBG | Closing plugin on server side
	I0403 19:19:35.782140   62017 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0403 19:19:35.783201   62017 addons.go:514] duration metric: took 1.039134621s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0403 19:19:35.783241   62017 start.go:246] waiting for cluster config update ...
	I0403 19:19:35.783252   62017 start.go:255] writing updated cluster config ...
	I0403 19:19:35.783555   62017 ssh_runner.go:195] Run: rm -f paused
	I0403 19:19:35.832238   62017 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 19:19:35.833822   62017 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-523797" cluster and "default" namespace by default
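The CRI-O section below is the runtime's debug log of ListContainers/ListPodSandbox RPCs. A hedged sketch of reading the same container and sandbox state directly on the node with crictl (profile name assumed from this log):

    minikube -p kubernetes-upgrade-523797 ssh -- sudo crictl ps -a
    minikube -p kubernetes-upgrade-523797 ssh -- sudo crictl pods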
	
	
	==> CRI-O <==
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.570578497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=757d3d0e-6101-4dec-82b8-de6863e03a88 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.571940652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77719420-693d-448f-a2c1-8ca24616cc39 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.573100750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707976573069022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77719420-693d-448f-a2c1-8ca24616cc39 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.573580617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c5ce1d7-faaa-4844-9c18-b045cb57ee22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.573651240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c5ce1d7-faaa-4844-9c18-b045cb57ee22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.574184774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75a8593d72be30d863bf1ec54b93ca46a4186f7a0f01dd9e8a0acd48169d754a,PodSandboxId:ecd782b1da6947a2e4102388f13203a6882f52faf2ea671fbf767b2385b8f987,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707973672478692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2gxwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43667b-2092-429c-b9ba-ad5a3186962a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c014f73bce55ce8c8b40c8230be6c7e082fb40190bfcc141c714c36e95bd353,PodSandboxId:7b633b1579a390762164bb2c447ba0a68684302587261fca8e5769d7f6bfa983,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743707973671252621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d3b3638a4285cf9f539b771ea5264e3b6f4d2703cc288058653e45b4dbcc19,PodSandboxId:42977c397eb49574fcbf06f5a171459bf7bdd482e147fa8c9be29c9793875fef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707973651194979,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mx6nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f1e78f562ef101f57a1ea47a733b2006c25609dfc278e24373c62b254aaa0c,PodSandboxId:5867ec7883569b113c04539cce14212696f64047a25c4841e6c3af9d20c284a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707973643488895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l72qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ab8c40-2dcd-4ad9-ae7c-88a
924660006,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73a90107e6d3a56878972de3e336ee9c09c0bdd3b6f398e419c6266e87b63d1,PodSandboxId:989d2e7ecd745c6d9e0397174c8e24d28c19ba0e3f529109d9b0b98a13f4f879,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707969813948867,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61777891b6b70bbff690ad7c2eebc9d7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da09ffa954ba395fb3bb187fd404bbfef6165bba4a31915604df245601b2b58e,PodSandboxId:2d002d1fe193c5deabb79a66cb54ae2ac671f81ddddb34ff53d8fce6677c0545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707969828382326,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c4ba2af3ce6152481609090dbc607f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e43078d1fb6cbde979399d6e556fe1149c41404f3dc381bc8cbc5fbc945846,PodSandboxId:2ec98cff3f5b1b354324356599a2dea55db3d45cf4c5d9b93272d49dc4f1bcdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707969791665004,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a21a8ab61972e32d42d682d2635bc55,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6ee4f438ba15e4cb9de6eb60baf46a0a5d7ed8e90fc1801fd65e2c6fca7b42,PodSandboxId:ee84e7546b40567f38c71a2b380299bd9573bcd193bc8597b78c094da973a226,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707966694566463,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f871a1c495dae713034d9837eb01f54,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b1833edd2b4de6fa553f89cda5e12713a450a4d4edfa1b56c1d5019c9d7c54,PodSandboxId:5867ec7883569b113c04539cce14212696f64047a25c4841e6c3af9d20c284a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707957640751620,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l72qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ab8c40-2dcd-4ad9-ae7c-88a924660006,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b06d77742191041339bf6f070547253a14e2dde3aad31aea01d0f3fb584297,PodSandboxId:a33b586ef857284936459da10e44cc2fe9f8e96b9546fe425b137d8a726c813b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707954851935468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2gxwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43667b-2092-429c-b9ba-ad5a3186962a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c003de8cd2f18c2b0cd1ce89a742350469eaa9bb8cf0299840b1a80f4fd173f2,PodSandboxId:913a20dd599f6e9bbcc6f7fc4946762cca29dfe2bb05e63290ed4f88971b9da8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707955023996198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mx6nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3effc48c04a4ce18abe3e6cf1a20ee68785bc6ebd7a0615340fd8a4d1c10e54,PodSandboxId:51c1af6ee3ccae229efd3dc134bf446e765c0ad3d2c84d7a21be2e7c43ea1d94,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1743707954177200090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6eefd71abbc334b4ed4dfe1a41d3c1e8f4227f6a62139521ae33768b926e3b,PodSandboxId:472c65e103e5a7fc483df4a522b4db73537e2a826487592834d0129e932a046b,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707954185985287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c4ba2af3ce6152481609090dbc607f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe8003f07d8aa5b67f399a929851aef229e3f3c3131d4533f5928fdc0f33402,PodSandboxId:6e876a57503a81e19deaa0362f7b1c50dc617e152cedd14f0cd29b2fc0845ef8,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707954169722715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a21a8ab61972e32d42d682d2635bc55,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432bbbd74d309fd94fa9f1c8b229ef9775e9c10c011d357e9529098e5581e79a,PodSandboxId:ad027959cbc39b45d837597757b9f95f0ae329f627ca1450cc014709f7539a18,Metadata:&ContainerMetadata{Name:etcd,Atte
mpt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707954091722599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61777891b6b70bbff690ad7c2eebc9d7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b253d372a18a1143f555e78805bb93b809d41f29794f0e14175ba82abb033c19,PodSandboxId:45865879984c4c0282b0788c3bc09042f9c7ecd039342affe2433e21282efb3f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707954045852298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f871a1c495dae713034d9837eb01f54,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c5ce1d7-faaa-4844-9c18-b045cb57ee22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.607798003Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5712feff-c576-4204-959d-4f691d524f2b name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.608184638Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:42977c397eb49574fcbf06f5a171459bf7bdd482e147fa8c9be29c9793875fef,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-mx6nt,Uid:0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1743707957257830697,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-mx6nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-03T19:18:04.397969895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5867ec7883569b113c04539cce14212696f64047a25c4841e6c3af9d20c284a2,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-l72qz,Uid:50ab8c40-2dcd-4ad9-ae7c-88a924660006,Namespac
e:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1743707957247374293,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-l72qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ab8c40-2dcd-4ad9-ae7c-88a924660006,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-03T19:18:04.379566957Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b633b1579a390762164bb2c447ba0a68684302587261fca8e5769d7f6bfa983,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1743707956908763981,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,},An
notations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-04-03T19:18:05.549684017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ecd782b1da6947a2e4102388f13203a6882f52faf2ea671fbf767b2385b8f987,Metadata:&PodSandboxMetadata{Name:kube-proxy-2gxwk,Uid:2a43667b-2092-429c-b9ba-ad5a3186962a,N
amespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1743707956907792159,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2gxwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43667b-2092-429c-b9ba-ad5a3186962a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-03T19:18:04.463288304Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ec98cff3f5b1b354324356599a2dea55db3d45cf4c5d9b93272d49dc4f1bcdd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-523797,Uid:7a21a8ab61972e32d42d682d2635bc55,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1743707956907096372,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a21a8ab61972e32d42d682
d2635bc55,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7a21a8ab61972e32d42d682d2635bc55,kubernetes.io/config.seen: 2025-04-03T19:17:51.989876305Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2d002d1fe193c5deabb79a66cb54ae2ac671f81ddddb34ff53d8fce6677c0545,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-523797,Uid:d6c4ba2af3ce6152481609090dbc607f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1743707956845220930,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c4ba2af3ce6152481609090dbc607f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.159:8443,kubernetes.io/config.hash: d6c4ba2af3ce6152481609090dbc607f,kubernetes.io/config.seen: 2025-04-03T19:17:51.989873484Z,kubernetes.i
o/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee84e7546b40567f38c71a2b380299bd9573bcd193bc8597b78c094da973a226,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-523797,Uid:2f871a1c495dae713034d9837eb01f54,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1743707956843559857,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f871a1c495dae713034d9837eb01f54,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2f871a1c495dae713034d9837eb01f54,kubernetes.io/config.seen: 2025-04-03T19:17:51.989874891Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:989d2e7ecd745c6d9e0397174c8e24d28c19ba0e3f529109d9b0b98a13f4f879,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-523797,Uid:61777891b6b70bbff690ad7c2eebc9d7,Namespace:kube-system,Atte
mpt:2,},State:SANDBOX_READY,CreatedAt:1743707956810283791,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61777891b6b70bbff690ad7c2eebc9d7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.159:2379,kubernetes.io/config.hash: 61777891b6b70bbff690ad7c2eebc9d7,kubernetes.io/config.seen: 2025-04-03T19:17:51.989869536Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:913a20dd599f6e9bbcc6f7fc4946762cca29dfe2bb05e63290ed4f88971b9da8,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-mx6nt,Uid:0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1743707954066104164,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-mx6nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-03T19:18:04.397969895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a33b586ef857284936459da10e44cc2fe9f8e96b9546fe425b137d8a726c813b,Metadata:&PodSandboxMetadata{Name:kube-proxy-2gxwk,Uid:2a43667b-2092-429c-b9ba-ad5a3186962a,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1743707953789919143,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2gxwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43667b-2092-429c-b9ba-ad5a3186962a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-03T19:18:04.463288304Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:51c1af6ee3ccae229efd3dc134bf446e765c0ad3d2c84d7a21be2e7c43ea1d94,Metadata:&PodSandboxMetadat
a{Name:storage-provisioner,Uid:1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1743707953587087317,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\
"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-04-03T19:18:05.549684017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45865879984c4c0282b0788c3bc09042f9c7ecd039342affe2433e21282efb3f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-523797,Uid:2f871a1c495dae713034d9837eb01f54,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1743707953519874317,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f871a1c495dae713034d9837eb01f54,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2f871a1c495dae713034d9837eb01f54,kubernetes.io/config.seen: 2025-04-03T19:17:51.989874891Z,kubernetes.io/config.source: f
ile,},RuntimeHandler:,},&PodSandbox{Id:6e876a57503a81e19deaa0362f7b1c50dc617e152cedd14f0cd29b2fc0845ef8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-523797,Uid:7a21a8ab61972e32d42d682d2635bc55,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1743707953506634404,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a21a8ab61972e32d42d682d2635bc55,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7a21a8ab61972e32d42d682d2635bc55,kubernetes.io/config.seen: 2025-04-03T19:17:51.989876305Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ad027959cbc39b45d837597757b9f95f0ae329f627ca1450cc014709f7539a18,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-523797,Uid:61777891b6b70bbff690ad7c2eebc9d7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1
743707953440890477,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61777891b6b70bbff690ad7c2eebc9d7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.159:2379,kubernetes.io/config.hash: 61777891b6b70bbff690ad7c2eebc9d7,kubernetes.io/config.seen: 2025-04-03T19:17:51.989869536Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:472c65e103e5a7fc483df4a522b4db73537e2a826487592834d0129e932a046b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-523797,Uid:d6c4ba2af3ce6152481609090dbc607f,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1743707953431768332,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d6c4ba2af3ce6152481609090dbc607f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.159:8443,kubernetes.io/config.hash: d6c4ba2af3ce6152481609090dbc607f,kubernetes.io/config.seen: 2025-04-03T19:17:51.989873484Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5712feff-c576-4204-959d-4f691d524f2b name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.609428752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=430738e3-ab62-4617-a7b5-2beed6e0e887 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.609511234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=430738e3-ab62-4617-a7b5-2beed6e0e887 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.609826438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75a8593d72be30d863bf1ec54b93ca46a4186f7a0f01dd9e8a0acd48169d754a,PodSandboxId:ecd782b1da6947a2e4102388f13203a6882f52faf2ea671fbf767b2385b8f987,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707973672478692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2gxwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43667b-2092-429c-b9ba-ad5a3186962a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c014f73bce55ce8c8b40c8230be6c7e082fb40190bfcc141c714c36e95bd353,PodSandboxId:7b633b1579a390762164bb2c447ba0a68684302587261fca8e5769d7f6bfa983,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743707973671252621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d3b3638a4285cf9f539b771ea5264e3b6f4d2703cc288058653e45b4dbcc19,PodSandboxId:42977c397eb49574fcbf06f5a171459bf7bdd482e147fa8c9be29c9793875fef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707973651194979,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mx6nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f1e78f562ef101f57a1ea47a733b2006c25609dfc278e24373c62b254aaa0c,PodSandboxId:5867ec7883569b113c04539cce14212696f64047a25c4841e6c3af9d20c284a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707973643488895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l72qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ab8c40-2dcd-4ad9-ae7c-88a
924660006,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73a90107e6d3a56878972de3e336ee9c09c0bdd3b6f398e419c6266e87b63d1,PodSandboxId:989d2e7ecd745c6d9e0397174c8e24d28c19ba0e3f529109d9b0b98a13f4f879,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707969813948867,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61777891b6b70bbff690ad7c2eebc9d7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da09ffa954ba395fb3bb187fd404bbfef6165bba4a31915604df245601b2b58e,PodSandboxId:2d002d1fe193c5deabb79a66cb54ae2ac671f81ddddb34ff53d8fce6677c0545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707969828382326,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c4ba2af3ce6152481609090dbc607f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e43078d1fb6cbde979399d6e556fe1149c41404f3dc381bc8cbc5fbc945846,PodSandboxId:2ec98cff3f5b1b354324356599a2dea55db3d45cf4c5d9b93272d49dc4f1bcdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707969791665004,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a21a8ab61972e32d42d682d2635bc55,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6ee4f438ba15e4cb9de6eb60baf46a0a5d7ed8e90fc1801fd65e2c6fca7b42,PodSandboxId:ee84e7546b40567f38c71a2b380299bd9573bcd193bc8597b78c094da973a226,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707966694566463,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f871a1c495dae713034d9837eb01f54,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b1833edd2b4de6fa553f89cda5e12713a450a4d4edfa1b56c1d5019c9d7c54,PodSandboxId:5867ec7883569b113c04539cce14212696f64047a25c4841e6c3af9d20c284a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707957640751620,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l72qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ab8c40-2dcd-4ad9-ae7c-88a924660006,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b06d77742191041339bf6f070547253a14e2dde3aad31aea01d0f3fb584297,PodSandboxId:a33b586ef857284936459da10e44cc2fe9f8e96b9546fe425b137d8a726c813b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707954851935468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2gxwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43667b-2092-429c-b9ba-ad5a3186962a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c003de8cd2f18c2b0cd1ce89a742350469eaa9bb8cf0299840b1a80f4fd173f2,PodSandboxId:913a20dd599f6e9bbcc6f7fc4946762cca29dfe2bb05e63290ed4f88971b9da8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707955023996198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mx6nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3effc48c04a4ce18abe3e6cf1a20ee68785bc6ebd7a0615340fd8a4d1c10e54,PodSandboxId:51c1af6ee3ccae229efd3dc134bf446e765c0ad3d2c84d7a21be2e7c43ea1d94,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1743707954177200090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6eefd71abbc334b4ed4dfe1a41d3c1e8f4227f6a62139521ae33768b926e3b,PodSandboxId:472c65e103e5a7fc483df4a522b4db73537e2a826487592834d0129e932a046b,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707954185985287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c4ba2af3ce6152481609090dbc607f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe8003f07d8aa5b67f399a929851aef229e3f3c3131d4533f5928fdc0f33402,PodSandboxId:6e876a57503a81e19deaa0362f7b1c50dc617e152cedd14f0cd29b2fc0845ef8,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707954169722715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a21a8ab61972e32d42d682d2635bc55,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432bbbd74d309fd94fa9f1c8b229ef9775e9c10c011d357e9529098e5581e79a,PodSandboxId:ad027959cbc39b45d837597757b9f95f0ae329f627ca1450cc014709f7539a18,Metadata:&ContainerMetadata{Name:etcd,Atte
mpt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707954091722599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61777891b6b70bbff690ad7c2eebc9d7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b253d372a18a1143f555e78805bb93b809d41f29794f0e14175ba82abb033c19,PodSandboxId:45865879984c4c0282b0788c3bc09042f9c7ecd039342affe2433e21282efb3f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707954045852298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f871a1c495dae713034d9837eb01f54,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=430738e3-ab62-4617-a7b5-2beed6e0e887 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.631589991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62084507-f5a8-4880-89f4-ff87c86790e9 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.631681162Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62084507-f5a8-4880-89f4-ff87c86790e9 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.632568032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2897445-fb95-4fdd-b4fd-16ccba5ffb1e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.633069228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707976633044076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2897445-fb95-4fdd-b4fd-16ccba5ffb1e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.633690160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af9739bb-8e89-4b77-8cbb-252d4569d0da name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.633759481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af9739bb-8e89-4b77-8cbb-252d4569d0da name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.634356387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75a8593d72be30d863bf1ec54b93ca46a4186f7a0f01dd9e8a0acd48169d754a,PodSandboxId:ecd782b1da6947a2e4102388f13203a6882f52faf2ea671fbf767b2385b8f987,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707973672478692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2gxwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43667b-2092-429c-b9ba-ad5a3186962a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c014f73bce55ce8c8b40c8230be6c7e082fb40190bfcc141c714c36e95bd353,PodSandboxId:7b633b1579a390762164bb2c447ba0a68684302587261fca8e5769d7f6bfa983,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743707973671252621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d3b3638a4285cf9f539b771ea5264e3b6f4d2703cc288058653e45b4dbcc19,PodSandboxId:42977c397eb49574fcbf06f5a171459bf7bdd482e147fa8c9be29c9793875fef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707973651194979,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mx6nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f1e78f562ef101f57a1ea47a733b2006c25609dfc278e24373c62b254aaa0c,PodSandboxId:5867ec7883569b113c04539cce14212696f64047a25c4841e6c3af9d20c284a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707973643488895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l72qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ab8c40-2dcd-4ad9-ae7c-88a
924660006,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73a90107e6d3a56878972de3e336ee9c09c0bdd3b6f398e419c6266e87b63d1,PodSandboxId:989d2e7ecd745c6d9e0397174c8e24d28c19ba0e3f529109d9b0b98a13f4f879,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707969813948867,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61777891b6b70bbff690ad7c2eebc9d7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da09ffa954ba395fb3bb187fd404bbfef6165bba4a31915604df245601b2b58e,PodSandboxId:2d002d1fe193c5deabb79a66cb54ae2ac671f81ddddb34ff53d8fce6677c0545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707969828382326,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c4ba2af3ce6152481609090dbc607f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e43078d1fb6cbde979399d6e556fe1149c41404f3dc381bc8cbc5fbc945846,PodSandboxId:2ec98cff3f5b1b354324356599a2dea55db3d45cf4c5d9b93272d49dc4f1bcdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707969791665004,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a21a8ab61972e32d42d682d2635bc55,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6ee4f438ba15e4cb9de6eb60baf46a0a5d7ed8e90fc1801fd65e2c6fca7b42,PodSandboxId:ee84e7546b40567f38c71a2b380299bd9573bcd193bc8597b78c094da973a226,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707966694566463,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f871a1c495dae713034d9837eb01f54,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b1833edd2b4de6fa553f89cda5e12713a450a4d4edfa1b56c1d5019c9d7c54,PodSandboxId:5867ec7883569b113c04539cce14212696f64047a25c4841e6c3af9d20c284a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707957640751620,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l72qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ab8c40-2dcd-4ad9-ae7c-88a924660006,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b06d77742191041339bf6f070547253a14e2dde3aad31aea01d0f3fb584297,PodSandboxId:a33b586ef857284936459da10e44cc2fe9f8e96b9546fe425b137d8a726c813b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707954851935468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2gxwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43667b-2092-429c-b9ba-ad5a3186962a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c003de8cd2f18c2b0cd1ce89a742350469eaa9bb8cf0299840b1a80f4fd173f2,PodSandboxId:913a20dd599f6e9bbcc6f7fc4946762cca29dfe2bb05e63290ed4f88971b9da8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707955023996198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mx6nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3effc48c04a4ce18abe3e6cf1a20ee68785bc6ebd7a0615340fd8a4d1c10e54,PodSandboxId:51c1af6ee3ccae229efd3dc134bf446e765c0ad3d2c84d7a21be2e7c43ea1d94,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1743707954177200090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6eefd71abbc334b4ed4dfe1a41d3c1e8f4227f6a62139521ae33768b926e3b,PodSandboxId:472c65e103e5a7fc483df4a522b4db73537e2a826487592834d0129e932a046b,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707954185985287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c4ba2af3ce6152481609090dbc607f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe8003f07d8aa5b67f399a929851aef229e3f3c3131d4533f5928fdc0f33402,PodSandboxId:6e876a57503a81e19deaa0362f7b1c50dc617e152cedd14f0cd29b2fc0845ef8,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707954169722715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a21a8ab61972e32d42d682d2635bc55,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432bbbd74d309fd94fa9f1c8b229ef9775e9c10c011d357e9529098e5581e79a,PodSandboxId:ad027959cbc39b45d837597757b9f95f0ae329f627ca1450cc014709f7539a18,Metadata:&ContainerMetadata{Name:etcd,Atte
mpt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707954091722599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61777891b6b70bbff690ad7c2eebc9d7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b253d372a18a1143f555e78805bb93b809d41f29794f0e14175ba82abb033c19,PodSandboxId:45865879984c4c0282b0788c3bc09042f9c7ecd039342affe2433e21282efb3f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707954045852298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f871a1c495dae713034d9837eb01f54,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af9739bb-8e89-4b77-8cbb-252d4569d0da name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.684945818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=059872e3-ec31-443a-82e9-0ade79c1937a name=/runtime.v1.RuntimeService/Version
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.685019076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=059872e3-ec31-443a-82e9-0ade79c1937a name=/runtime.v1.RuntimeService/Version
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.686282534Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d28b038f-6ba0-4ab3-b37d-3860b300d79a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.686629349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707976686608970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d28b038f-6ba0-4ab3-b37d-3860b300d79a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.687198330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6555d273-7070-4a9d-9015-62a610faf645 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.687249273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6555d273-7070-4a9d-9015-62a610faf645 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:19:36 kubernetes-upgrade-523797 crio[3208]: time="2025-04-03 19:19:36.687599185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75a8593d72be30d863bf1ec54b93ca46a4186f7a0f01dd9e8a0acd48169d754a,PodSandboxId:ecd782b1da6947a2e4102388f13203a6882f52faf2ea671fbf767b2385b8f987,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707973672478692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2gxwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43667b-2092-429c-b9ba-ad5a3186962a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c014f73bce55ce8c8b40c8230be6c7e082fb40190bfcc141c714c36e95bd353,PodSandboxId:7b633b1579a390762164bb2c447ba0a68684302587261fca8e5769d7f6bfa983,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743707973671252621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d3b3638a4285cf9f539b771ea5264e3b6f4d2703cc288058653e45b4dbcc19,PodSandboxId:42977c397eb49574fcbf06f5a171459bf7bdd482e147fa8c9be29c9793875fef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707973651194979,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mx6nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f1e78f562ef101f57a1ea47a733b2006c25609dfc278e24373c62b254aaa0c,PodSandboxId:5867ec7883569b113c04539cce14212696f64047a25c4841e6c3af9d20c284a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707973643488895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l72qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ab8c40-2dcd-4ad9-ae7c-88a
924660006,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73a90107e6d3a56878972de3e336ee9c09c0bdd3b6f398e419c6266e87b63d1,PodSandboxId:989d2e7ecd745c6d9e0397174c8e24d28c19ba0e3f529109d9b0b98a13f4f879,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707969813948867,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61777891b6b70bbff690ad7c2eebc9d7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da09ffa954ba395fb3bb187fd404bbfef6165bba4a31915604df245601b2b58e,PodSandboxId:2d002d1fe193c5deabb79a66cb54ae2ac671f81ddddb34ff53d8fce6677c0545,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707969828382326,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c4ba2af3ce6152481609090dbc607f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e43078d1fb6cbde979399d6e556fe1149c41404f3dc381bc8cbc5fbc945846,PodSandboxId:2ec98cff3f5b1b354324356599a2dea55db3d45cf4c5d9b93272d49dc4f1bcdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707969791665004,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a21a8ab61972e32d42d682d2635bc55,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6ee4f438ba15e4cb9de6eb60baf46a0a5d7ed8e90fc1801fd65e2c6fca7b42,PodSandboxId:ee84e7546b40567f38c71a2b380299bd9573bcd193bc8597b78c094da973a226,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707966694566463,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f871a1c495dae713034d9837eb01f54,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b1833edd2b4de6fa553f89cda5e12713a450a4d4edfa1b56c1d5019c9d7c54,PodSandboxId:5867ec7883569b113c04539cce14212696f64047a25c4841e6c3af9d20c284a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707957640751620,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l72qz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ab8c40-2dcd-4ad9-ae7c-88a924660006,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b06d77742191041339bf6f070547253a14e2dde3aad31aea01d0f3fb584297,PodSandboxId:a33b586ef857284936459da10e44cc2fe9f8e96b9546fe425b137d8a726c813b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707954851935468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2gxwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a43667b-2092-429c-b9ba-ad5a3186962a,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c003de8cd2f18c2b0cd1ce89a742350469eaa9bb8cf0299840b1a80f4fd173f2,PodSandboxId:913a20dd599f6e9bbcc6f7fc4946762cca29dfe2bb05e63290ed4f88971b9da8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707955023996198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mx6nt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6e30fb-5a21-4c81-9e7e-8d4f5e1a41c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3effc48c04a4ce18abe3e6cf1a20ee68785bc6ebd7a0615340fd8a4d1c10e54,PodSandboxId:51c1af6ee3ccae229efd3dc134bf446e765c0ad3d2c84d7a21be2e7c43ea1d94,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1743707954177200090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e803e42-8aa0-47f0-8cb2-4eb30f5f632b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6eefd71abbc334b4ed4dfe1a41d3c1e8f4227f6a62139521ae33768b926e3b,PodSandboxId:472c65e103e5a7fc483df4a522b4db73537e2a826487592834d0129e932a046b,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707954185985287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c4ba2af3ce6152481609090dbc607f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe8003f07d8aa5b67f399a929851aef229e3f3c3131d4533f5928fdc0f33402,PodSandboxId:6e876a57503a81e19deaa0362f7b1c50dc617e152cedd14f0cd29b2fc0845ef8,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707954169722715,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a21a8ab61972e32d42d682d2635bc55,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:432bbbd74d309fd94fa9f1c8b229ef9775e9c10c011d357e9529098e5581e79a,PodSandboxId:ad027959cbc39b45d837597757b9f95f0ae329f627ca1450cc014709f7539a18,Metadata:&ContainerMetadata{Name:etcd,Atte
mpt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707954091722599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61777891b6b70bbff690ad7c2eebc9d7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b253d372a18a1143f555e78805bb93b809d41f29794f0e14175ba82abb033c19,PodSandboxId:45865879984c4c0282b0788c3bc09042f9c7ecd039342affe2433e21282efb3f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707954045852298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-523797,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f871a1c495dae713034d9837eb01f54,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6555d273-7070-4a9d-9015-62a610faf645 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	75a8593d72be3       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   3 seconds ago       Running             kube-proxy                2                   ecd782b1da694       kube-proxy-2gxwk
	9c014f73bce55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   7b633b1579a39       storage-provisioner
	d5d3b3638a428       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   42977c397eb49       coredns-668d6bf9bc-mx6nt
	d2f1e78f562ef       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   5867ec7883569       coredns-668d6bf9bc-l72qz
	da09ffa954ba3       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   6 seconds ago       Running             kube-apiserver            2                   2d002d1fe193c       kube-apiserver-kubernetes-upgrade-523797
	d73a90107e6d3       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   6 seconds ago       Running             etcd                      2                   989d2e7ecd745       etcd-kubernetes-upgrade-523797
	83e43078d1fb6       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   6 seconds ago       Running             kube-scheduler            2                   2ec98cff3f5b1       kube-scheduler-kubernetes-upgrade-523797
	dc6ee4f438ba1       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   10 seconds ago      Running             kube-controller-manager   2                   ee84e7546b405       kube-controller-manager-kubernetes-upgrade-523797
	50b1833edd2b4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Exited              coredns                   1                   5867ec7883569       coredns-668d6bf9bc-l72qz
	c003de8cd2f18       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   21 seconds ago      Exited              coredns                   1                   913a20dd599f6       coredns-668d6bf9bc-mx6nt
	45b06d7774219       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   21 seconds ago      Exited              kube-proxy                1                   a33b586ef8572       kube-proxy-2gxwk
	4b6eefd71abbc       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   22 seconds ago      Exited              kube-apiserver            1                   472c65e103e5a       kube-apiserver-kubernetes-upgrade-523797
	e3effc48c04a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   22 seconds ago      Exited              storage-provisioner       2                   51c1af6ee3cca       storage-provisioner
	1fe8003f07d8a       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   22 seconds ago      Exited              kube-scheduler            1                   6e876a57503a8       kube-scheduler-kubernetes-upgrade-523797
	432bbbd74d309       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   22 seconds ago      Exited              etcd                      1                   ad027959cbc39       etcd-kubernetes-upgrade-523797
	b253d372a18a1       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   22 seconds ago      Exited              kube-controller-manager   1                   45865879984c4       kube-controller-manager-kubernetes-upgrade-523797
	
	
	==> coredns [50b1833edd2b4de6fa553f89cda5e12713a450a4d4edfa1b56c1d5019c9d7c54] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c003de8cd2f18c2b0cd1ce89a742350469eaa9bb8cf0299840b1a80f4fd173f2] <==
	
	
	==> coredns [d2f1e78f562ef101f57a1ea47a733b2006c25609dfc278e24373c62b254aaa0c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d5d3b3638a4285cf9f539b771ea5264e3b6f4d2703cc288058653e45b4dbcc19] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-523797
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-523797
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 03 Apr 2025 19:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-523797
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 03 Apr 2025 19:19:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 03 Apr 2025 19:19:32 +0000   Thu, 03 Apr 2025 19:17:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 03 Apr 2025 19:19:32 +0000   Thu, 03 Apr 2025 19:17:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 03 Apr 2025 19:19:32 +0000   Thu, 03 Apr 2025 19:17:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 03 Apr 2025 19:19:32 +0000   Thu, 03 Apr 2025 19:17:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    kubernetes-upgrade-523797
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f17cdfaa262a46cf82a43e8e1aa34aaa
	  System UUID:                f17cdfaa-262a-46cf-82a4-3e8e1aa34aaa
	  Boot ID:                    0ebdb59a-55fe-48e0-aee3-5ada9585bc87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-l72qz                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 coredns-668d6bf9bc-mx6nt                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 etcd-kubernetes-upgrade-523797                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         93s
	  kube-system                 kube-apiserver-kubernetes-upgrade-523797             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-523797    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-2gxwk                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-kubernetes-upgrade-523797             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2s                   kube-proxy       
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  104s (x8 over 105s)  kubelet          Node kubernetes-upgrade-523797 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     104s (x7 over 105s)  kubelet          Node kubernetes-upgrade-523797 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    104s (x8 over 105s)  kubelet          Node kubernetes-upgrade-523797 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           95s                  node-controller  Node kubernetes-upgrade-523797 event: Registered Node kubernetes-upgrade-523797 in Controller
	  Normal  Starting                 8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)      kubelet          Node kubernetes-upgrade-523797 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)      kubelet          Node kubernetes-upgrade-523797 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)      kubelet          Node kubernetes-upgrade-523797 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                   node-controller  Node kubernetes-upgrade-523797 event: Registered Node kubernetes-upgrade-523797 in Controller
	
	
	==> dmesg <==
	[  +2.352469] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.986089] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.066695] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069677] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.200826] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.115680] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.306056] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +4.269130] systemd-fstab-generator[718]: Ignoring "noauto" option for root device
	[  +0.081944] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.783300] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[Apr 3 19:18] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.093810] kauditd_printk_skb: 97 callbacks suppressed
	[ +31.707217] kauditd_printk_skb: 103 callbacks suppressed
	[Apr 3 19:19] systemd-fstab-generator[2279]: Ignoring "noauto" option for root device
	[  +0.179249] systemd-fstab-generator[2291]: Ignoring "noauto" option for root device
	[  +0.243713] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	[  +0.322483] systemd-fstab-generator[2470]: Ignoring "noauto" option for root device
	[  +1.243162] systemd-fstab-generator[3049]: Ignoring "noauto" option for root device
	[  +1.335302] systemd-fstab-generator[3404]: Ignoring "noauto" option for root device
	[ +10.369490] kauditd_printk_skb: 302 callbacks suppressed
	[  +2.365094] systemd-fstab-generator[4168]: Ignoring "noauto" option for root device
	[  +4.618551] kauditd_printk_skb: 42 callbacks suppressed
	[  +1.085776] systemd-fstab-generator[4696]: Ignoring "noauto" option for root device
	
	
	==> etcd [432bbbd74d309fd94fa9f1c8b229ef9775e9c10c011d357e9529098e5581e79a] <==
	{"level":"info","ts":"2025-04-03T19:19:14.695409Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-04-03T19:19:14.746289Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","commit-index":448}
	{"level":"info","ts":"2025-04-03T19:19:14.746473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af switched to configuration voters=()"}
	{"level":"info","ts":"2025-04-03T19:19:14.746561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became follower at term 2"}
	{"level":"info","ts":"2025-04-03T19:19:14.746580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f0ef8018a32f46af [peers: [], term: 2, commit: 448, applied: 0, lastindex: 448, lastterm: 2]"}
	{"level":"warn","ts":"2025-04-03T19:19:14.755641Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-04-03T19:19:14.773051Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":427}
	{"level":"info","ts":"2025-04-03T19:19:14.803096Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-04-03T19:19:14.809082Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"f0ef8018a32f46af","timeout":"7s"}
	{"level":"info","ts":"2025-04-03T19:19:14.809383Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"f0ef8018a32f46af"}
	{"level":"info","ts":"2025-04-03T19:19:14.809426Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"f0ef8018a32f46af","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-03T19:19:14.809938Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-03T19:19:14.812443Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-03T19:19:14.812598Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-03T19:19:14.812639Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-03T19:19:14.812648Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-03T19:19:14.812883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af switched to configuration voters=(17361235931841906351)"}
	{"level":"info","ts":"2025-04-03T19:19:14.812947Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","added-peer-id":"f0ef8018a32f46af","added-peer-peer-urls":["https://192.168.39.159:2380"]}
	{"level":"info","ts":"2025-04-03T19:19:14.813038Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-03T19:19:14.813072Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-03T19:19:14.823740Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-03T19:19:14.824068Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"f0ef8018a32f46af","initial-advertise-peer-urls":["https://192.168.39.159:2380"],"listen-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.159:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-03T19:19:14.828008Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-03T19:19:14.828185Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2025-04-03T19:19:14.828219Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.159:2380"}
	
	
	==> etcd [d73a90107e6d3a56878972de3e336ee9c09c0bdd3b6f398e419c6266e87b63d1] <==
	{"level":"info","ts":"2025-04-03T19:19:30.125635Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","added-peer-id":"f0ef8018a32f46af","added-peer-peer-urls":["https://192.168.39.159:2380"]}
	{"level":"info","ts":"2025-04-03T19:19:30.125729Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-03T19:19:30.125749Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-03T19:19:30.129500Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-03T19:19:30.131246Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-03T19:19:30.133149Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2025-04-03T19:19:30.134146Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2025-04-03T19:19:30.134877Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"f0ef8018a32f46af","initial-advertise-peer-urls":["https://192.168.39.159:2380"],"listen-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.159:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-03T19:19:30.136159Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-03T19:19:31.695922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-03T19:19:31.695959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-03T19:19:31.695992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgPreVoteResp from f0ef8018a32f46af at term 2"}
	{"level":"info","ts":"2025-04-03T19:19:31.696015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became candidate at term 3"}
	{"level":"info","ts":"2025-04-03T19:19:31.696023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgVoteResp from f0ef8018a32f46af at term 3"}
	{"level":"info","ts":"2025-04-03T19:19:31.696031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became leader at term 3"}
	{"level":"info","ts":"2025-04-03T19:19:31.696038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0ef8018a32f46af elected leader f0ef8018a32f46af at term 3"}
	{"level":"info","ts":"2025-04-03T19:19:31.702451Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f0ef8018a32f46af","local-member-attributes":"{Name:kubernetes-upgrade-523797 ClientURLs:[https://192.168.39.159:2379]}","request-path":"/0/members/f0ef8018a32f46af/attributes","cluster-id":"bc02953927cca850","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-03T19:19:31.702661Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-03T19:19:31.702755Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-03T19:19:31.702778Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-03T19:19:31.702880Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-03T19:19:31.703468Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-03T19:19:31.703472Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-03T19:19:31.704171Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.159:2379"}
	{"level":"info","ts":"2025-04-03T19:19:31.704205Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:19:37 up 2 min,  0 users,  load average: 2.26, 0.65, 0.22
	Linux kubernetes-upgrade-523797 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4b6eefd71abbc334b4ed4dfe1a41d3c1e8f4227f6a62139521ae33768b926e3b] <==
	W0403 19:19:15.151809       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0403 19:19:15.153637       1 options.go:238] external host was not specified, using 192.168.39.159
	I0403 19:19:15.159790       1 server.go:143] Version: v1.32.2
	I0403 19:19:15.161204       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [da09ffa954ba395fb3bb187fd404bbfef6165bba4a31915604df245601b2b58e] <==
	I0403 19:19:32.765178       1 autoregister_controller.go:144] Starting autoregister controller
	I0403 19:19:32.765210       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0403 19:19:32.819050       1 shared_informer.go:320] Caches are synced for configmaps
	I0403 19:19:32.819181       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0403 19:19:32.819486       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0403 19:19:32.819509       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0403 19:19:32.819989       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0403 19:19:32.820061       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0403 19:19:32.821098       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0403 19:19:32.828945       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0403 19:19:32.845602       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0403 19:19:32.871670       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0403 19:19:32.881297       1 cache.go:39] Caches are synced for autoregister controller
	I0403 19:19:32.882580       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0403 19:19:32.882606       1 policy_source.go:240] refreshing policies
	I0403 19:19:32.889842       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0403 19:19:33.395721       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0403 19:19:33.744196       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0403 19:19:34.555462       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0403 19:19:34.608698       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0403 19:19:34.648681       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0403 19:19:34.654616       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0403 19:19:36.303755       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0403 19:19:36.503025       1 controller.go:615] quota admission added evaluator for: endpoints
	I0403 19:19:36.704264       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b253d372a18a1143f555e78805bb93b809d41f29794f0e14175ba82abb033c19] <==
	
	
	==> kube-controller-manager [dc6ee4f438ba15e4cb9de6eb60baf46a0a5d7ed8e90fc1801fd65e2c6fca7b42] <==
	I0403 19:19:36.320526       1 shared_informer.go:320] Caches are synced for resource quota
	I0403 19:19:36.320711       1 shared_informer.go:320] Caches are synced for crt configmap
	I0403 19:19:36.324836       1 shared_informer.go:320] Caches are synced for ephemeral
	I0403 19:19:36.327301       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0403 19:19:36.338086       1 shared_informer.go:320] Caches are synced for HPA
	I0403 19:19:36.338367       1 shared_informer.go:320] Caches are synced for garbage collector
	I0403 19:19:36.340345       1 shared_informer.go:320] Caches are synced for expand
	I0403 19:19:36.343616       1 shared_informer.go:320] Caches are synced for PVC protection
	I0403 19:19:36.347763       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0403 19:19:36.347840       1 shared_informer.go:320] Caches are synced for daemon sets
	I0403 19:19:36.347871       1 shared_informer.go:320] Caches are synced for taint
	I0403 19:19:36.348070       1 shared_informer.go:320] Caches are synced for cronjob
	I0403 19:19:36.349006       1 shared_informer.go:320] Caches are synced for disruption
	I0403 19:19:36.349512       1 shared_informer.go:320] Caches are synced for service account
	I0403 19:19:36.350310       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0403 19:19:36.351235       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-523797"
	I0403 19:19:36.351275       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0403 19:19:36.351373       1 shared_informer.go:320] Caches are synced for garbage collector
	I0403 19:19:36.351400       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0403 19:19:36.351408       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0403 19:19:36.357691       1 shared_informer.go:320] Caches are synced for PV protection
	I0403 19:19:36.366273       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0403 19:19:36.366307       1 shared_informer.go:320] Caches are synced for persistent volume
	I0403 19:19:36.709626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="361.410934ms"
	I0403 19:19:36.710787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="78.121µs"
	
	
	==> kube-proxy [45b06d77742191041339bf6f070547253a14e2dde3aad31aea01d0f3fb584297] <==
	
	
	==> kube-proxy [75a8593d72be30d863bf1ec54b93ca46a4186f7a0f01dd9e8a0acd48169d754a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0403 19:19:34.022290       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0403 19:19:34.034659       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.159"]
	E0403 19:19:34.034739       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0403 19:19:34.080280       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0403 19:19:34.080738       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0403 19:19:34.080894       1 server_linux.go:170] "Using iptables Proxier"
	I0403 19:19:34.085390       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0403 19:19:34.085653       1 server.go:497] "Version info" version="v1.32.2"
	I0403 19:19:34.085678       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 19:19:34.087852       1 config.go:199] "Starting service config controller"
	I0403 19:19:34.087891       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0403 19:19:34.087914       1 config.go:105] "Starting endpoint slice config controller"
	I0403 19:19:34.087918       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0403 19:19:34.088310       1 config.go:329] "Starting node config controller"
	I0403 19:19:34.088334       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0403 19:19:34.188616       1 shared_informer.go:320] Caches are synced for node config
	I0403 19:19:34.188692       1 shared_informer.go:320] Caches are synced for service config
	I0403 19:19:34.188707       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1fe8003f07d8aa5b67f399a929851aef229e3f3c3131d4533f5928fdc0f33402] <==
	
	
	==> kube-scheduler [83e43078d1fb6cbde979399d6e556fe1149c41404f3dc381bc8cbc5fbc945846] <==
	I0403 19:19:30.483774       1 serving.go:386] Generated self-signed cert in-memory
	W0403 19:19:32.786653       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0403 19:19:32.787968       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0403 19:19:32.788078       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0403 19:19:32.788157       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0403 19:19:32.808909       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0403 19:19:32.809000       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 19:19:32.810884       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0403 19:19:32.810922       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0403 19:19:32.811287       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0403 19:19:32.811414       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0403 19:19:32.911895       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: E0403 19:19:32.570284    4175 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-523797\" not found" node="kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: E0403 19:19:32.571248    4175 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-523797\" not found" node="kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:32.841023    4175 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: E0403 19:19:32.961479    4175 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-kubernetes-upgrade-523797\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:32.961644    4175 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: E0403 19:19:32.969232    4175 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-kubernetes-upgrade-523797\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:32.969387    4175 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: E0403 19:19:32.989902    4175 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-523797\" already exists" pod="kube-system/etcd-kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:32.990098    4175 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:32.995307    4175 kubelet_node_status.go:125] "Node was previously registered" node="kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:32.995583    4175 kubelet_node_status.go:79] "Successfully registered node" node="kubernetes-upgrade-523797"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:32.995683    4175 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 03 19:19:32 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:32.996797    4175 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: E0403 19:19:33.023327    4175 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-523797\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-523797"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:33.308758    4175 apiserver.go:52] "Watching apiserver"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:33.330444    4175 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:33.386535    4175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e803e42-8aa0-47f0-8cb2-4eb30f5f632b-tmp\") pod \"storage-provisioner\" (UID: \"1e803e42-8aa0-47f0-8cb2-4eb30f5f632b\") " pod="kube-system/storage-provisioner"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:33.386683    4175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a43667b-2092-429c-b9ba-ad5a3186962a-xtables-lock\") pod \"kube-proxy-2gxwk\" (UID: \"2a43667b-2092-429c-b9ba-ad5a3186962a\") " pod="kube-system/kube-proxy-2gxwk"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:33.386745    4175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a43667b-2092-429c-b9ba-ad5a3186962a-lib-modules\") pod \"kube-proxy-2gxwk\" (UID: \"2a43667b-2092-429c-b9ba-ad5a3186962a\") " pod="kube-system/kube-proxy-2gxwk"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:33.570478    4175 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-kubernetes-upgrade-523797"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: E0403 19:19:33.578910    4175 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-kubernetes-upgrade-523797\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-523797"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:33.612655    4175 scope.go:117] "RemoveContainer" containerID="45b06d77742191041339bf6f070547253a14e2dde3aad31aea01d0f3fb584297"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:33.612869    4175 scope.go:117] "RemoveContainer" containerID="c003de8cd2f18c2b0cd1ce89a742350469eaa9bb8cf0299840b1a80f4fd173f2"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:33.613040    4175 scope.go:117] "RemoveContainer" containerID="50b1833edd2b4de6fa553f89cda5e12713a450a4d4edfa1b56c1d5019c9d7c54"
	Apr 03 19:19:33 kubernetes-upgrade-523797 kubelet[4175]: I0403 19:19:33.613774    4175 scope.go:117] "RemoveContainer" containerID="e3effc48c04a4ce18abe3e6cf1a20ee68785bc6ebd7a0615340fd8a4d1c10e54"
	
	
	==> storage-provisioner [9c014f73bce55ce8c8b40c8230be6c7e082fb40190bfcc141c714c36e95bd353] <==
	I0403 19:19:33.837394       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0403 19:19:33.865760       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0403 19:19:33.867182       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [e3effc48c04a4ce18abe3e6cf1a20ee68785bc6ebd7a0615340fd8a4d1c10e54] <==
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-523797 -n kubernetes-upgrade-523797
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-523797 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-523797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-523797
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-523797: (1.170894318s)
--- FAIL: TestKubernetesUpgrade (407.96s)
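The failing case can be narrowed down locally by re-running only this test. The sketch below is an assumption, not the job's actual invocation: it uses the standard go test runner against minikube's integration suite under test/integration, and leaves driver/runtime selection (kvm2 with cri-o, as seen in the logs above) to the suite's own flags, which this report does not show.

	# Assumption: stock go test flags only, run from a minikube source checkout.
	# Selects just the failing integration test by name.
	go test -v -timeout 90m -run 'TestKubernetesUpgrade$' ./test/integration

The post-mortem commands captured above (out/minikube-linux-amd64 logs -n 25, status --format={{.APIServer}}, and kubectl get po --field-selector=status.phase!=Running) are what helpers_test.go runs automatically on failure, so the same inspection can be repeated by hand against a live profile before it is deleted.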

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (86.3s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-942912 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-942912 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m19.14228766s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-942912] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-942912" primary control-plane node in "pause-942912" cluster
	* Updating the running kvm2 "pause-942912" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-942912" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0403 19:16:01.307645   57537 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:16:01.308935   57537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:16:01.308952   57537 out.go:358] Setting ErrFile to fd 2...
	I0403 19:16:01.308959   57537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:16:01.309559   57537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:16:01.310250   57537 out.go:352] Setting JSON to false
	I0403 19:16:01.311245   57537 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7106,"bootTime":1743700655,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:16:01.311343   57537 start.go:139] virtualization: kvm guest
	I0403 19:16:01.313159   57537 out.go:177] * [pause-942912] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:16:01.314636   57537 notify.go:220] Checking for updates...
	I0403 19:16:01.314646   57537 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:16:01.315871   57537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:16:01.317119   57537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:16:01.318312   57537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:16:01.319471   57537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:16:01.320743   57537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:16:01.322487   57537 config.go:182] Loaded profile config "pause-942912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:16:01.322913   57537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:16:01.323002   57537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:16:01.337693   57537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0403 19:16:01.338208   57537 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:16:01.338719   57537 main.go:141] libmachine: Using API Version  1
	I0403 19:16:01.338738   57537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:16:01.339106   57537 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:16:01.339277   57537 main.go:141] libmachine: (pause-942912) Calling .DriverName
	I0403 19:16:01.339510   57537 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:16:01.339793   57537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:16:01.339825   57537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:16:01.354596   57537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0403 19:16:01.355109   57537 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:16:01.355542   57537 main.go:141] libmachine: Using API Version  1
	I0403 19:16:01.355563   57537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:16:01.355942   57537 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:16:01.356124   57537 main.go:141] libmachine: (pause-942912) Calling .DriverName
	I0403 19:16:01.630594   57537 out.go:177] * Using the kvm2 driver based on existing profile
	I0403 19:16:01.631854   57537 start.go:297] selected driver: kvm2
	I0403 19:16:01.631870   57537 start.go:901] validating driver "kvm2" against &{Name:pause-942912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pa
use-942912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.237 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:16:01.632043   57537 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:16:01.632473   57537 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:16:01.632565   57537 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 19:16:01.647896   57537 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0403 19:16:01.648955   57537 cni.go:84] Creating CNI manager for ""
	I0403 19:16:01.649028   57537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:16:01.649105   57537 start.go:340] cluster config:
	{Name:pause-942912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-942912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.237 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:16:01.649285   57537 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:16:01.651363   57537 out.go:177] * Starting "pause-942912" primary control-plane node in "pause-942912" cluster
	I0403 19:16:01.652596   57537 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 19:16:01.652637   57537 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0403 19:16:01.652649   57537 cache.go:56] Caching tarball of preloaded images
	I0403 19:16:01.652739   57537 preload.go:172] Found /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0403 19:16:01.652751   57537 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0403 19:16:01.652899   57537 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/config.json ...
	I0403 19:16:01.653159   57537 start.go:360] acquireMachinesLock for pause-942912: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 19:16:23.883545   57537 start.go:364] duration metric: took 22.230335846s to acquireMachinesLock for "pause-942912"
	I0403 19:16:23.883602   57537 start.go:96] Skipping create...Using existing machine configuration
	I0403 19:16:23.883610   57537 fix.go:54] fixHost starting: 
	I0403 19:16:23.884146   57537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:16:23.884220   57537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:16:23.903978   57537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37619
	I0403 19:16:23.904513   57537 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:16:23.905009   57537 main.go:141] libmachine: Using API Version  1
	I0403 19:16:23.905030   57537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:16:23.905336   57537 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:16:23.905515   57537 main.go:141] libmachine: (pause-942912) Calling .DriverName
	I0403 19:16:23.905658   57537 main.go:141] libmachine: (pause-942912) Calling .GetState
	I0403 19:16:23.907316   57537 fix.go:112] recreateIfNeeded on pause-942912: state=Running err=<nil>
	W0403 19:16:23.907338   57537 fix.go:138] unexpected machine state, will restart: <nil>
	I0403 19:16:23.909293   57537 out.go:177] * Updating the running kvm2 "pause-942912" VM ...
	I0403 19:16:23.910468   57537 machine.go:93] provisionDockerMachine start ...
	I0403 19:16:23.910493   57537 main.go:141] libmachine: (pause-942912) Calling .DriverName
	I0403 19:16:23.910714   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHHostname
	I0403 19:16:23.913154   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:23.913626   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:23.913656   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:23.913780   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHPort
	I0403 19:16:23.913951   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:23.914114   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:23.914252   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHUsername
	I0403 19:16:23.914389   57537 main.go:141] libmachine: Using SSH client type: native
	I0403 19:16:23.914598   57537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.237 22 <nil> <nil>}
	I0403 19:16:23.914608   57537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0403 19:16:24.024358   57537 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-942912
	
	I0403 19:16:24.024388   57537 main.go:141] libmachine: (pause-942912) Calling .GetMachineName
	I0403 19:16:24.024616   57537 buildroot.go:166] provisioning hostname "pause-942912"
	I0403 19:16:24.024641   57537 main.go:141] libmachine: (pause-942912) Calling .GetMachineName
	I0403 19:16:24.024899   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHHostname
	I0403 19:16:24.027888   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.028327   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:24.028369   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.028493   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHPort
	I0403 19:16:24.028671   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:24.028826   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:24.028942   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHUsername
	I0403 19:16:24.029104   57537 main.go:141] libmachine: Using SSH client type: native
	I0403 19:16:24.029344   57537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.237 22 <nil> <nil>}
	I0403 19:16:24.029355   57537 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-942912 && echo "pause-942912" | sudo tee /etc/hostname
	I0403 19:16:24.150194   57537 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-942912
	
	I0403 19:16:24.150226   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHHostname
	I0403 19:16:24.152812   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.153156   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:24.153197   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.153388   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHPort
	I0403 19:16:24.153549   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:24.153679   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:24.153793   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHUsername
	I0403 19:16:24.153933   57537 main.go:141] libmachine: Using SSH client type: native
	I0403 19:16:24.154195   57537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.237 22 <nil> <nil>}
	I0403 19:16:24.154218   57537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-942912' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-942912/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-942912' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0403 19:16:24.267729   57537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:16:24.267759   57537 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20591-14371/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-14371/.minikube}
	I0403 19:16:24.267785   57537 buildroot.go:174] setting up certificates
	I0403 19:16:24.267794   57537 provision.go:84] configureAuth start
	I0403 19:16:24.267803   57537 main.go:141] libmachine: (pause-942912) Calling .GetMachineName
	I0403 19:16:24.268081   57537 main.go:141] libmachine: (pause-942912) Calling .GetIP
	I0403 19:16:24.270813   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.271191   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:24.271213   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.271368   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHHostname
	I0403 19:16:24.273467   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.273739   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:24.273762   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.273902   57537 provision.go:143] copyHostCerts
	I0403 19:16:24.273976   57537 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem, removing ...
	I0403 19:16:24.273996   57537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem
	I0403 19:16:24.274067   57537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem (1082 bytes)
	I0403 19:16:24.274200   57537 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem, removing ...
	I0403 19:16:24.274212   57537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem
	I0403 19:16:24.274243   57537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem (1123 bytes)
	I0403 19:16:24.274350   57537 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem, removing ...
	I0403 19:16:24.274360   57537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem
	I0403 19:16:24.274389   57537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem (1675 bytes)
	I0403 19:16:24.274456   57537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem org=jenkins.pause-942912 san=[127.0.0.1 192.168.50.237 localhost minikube pause-942912]
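
The server certificate regenerated above embeds the SANs listed in the log line (127.0.0.1, 192.168.50.237, localhost, minikube, pause-942912). As an illustrative check on the host, not something the test itself runs, the SANs can be read back out of the generated server.pem with openssl; the path is the one shown in the log:

	# Inspect the SANs baked into the server cert generated above (path from the log).
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
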
	I0403 19:16:24.404003   57537 provision.go:177] copyRemoteCerts
	I0403 19:16:24.404053   57537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0403 19:16:24.404075   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHHostname
	I0403 19:16:24.406796   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.407257   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:24.407290   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.407462   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHPort
	I0403 19:16:24.407641   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:24.407773   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHUsername
	I0403 19:16:24.407906   57537 sshutil.go:53] new ssh client: &{IP:192.168.50.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/pause-942912/id_rsa Username:docker}
	I0403 19:16:24.496732   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0403 19:16:24.520932   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0403 19:16:24.544650   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0403 19:16:24.571813   57537 provision.go:87] duration metric: took 304.005167ms to configureAuth
	I0403 19:16:24.571843   57537 buildroot.go:189] setting minikube options for container-runtime
	I0403 19:16:24.572072   57537 config.go:182] Loaded profile config "pause-942912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:16:24.572158   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHHostname
	I0403 19:16:24.575090   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.575525   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:24.575552   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:24.575726   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHPort
	I0403 19:16:24.575915   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:24.576068   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:24.576172   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHUsername
	I0403 19:16:24.576314   57537 main.go:141] libmachine: Using SSH client type: native
	I0403 19:16:24.576581   57537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.237 22 <nil> <nil>}
	I0403 19:16:24.576603   57537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0403 19:16:30.219657   57537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0403 19:16:30.219711   57537 machine.go:96] duration metric: took 6.309202142s to provisionDockerMachine
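
Most of the 6.3s provisionDockerMachine time above is the SSH command that writes the CRIO_MINIKUBE_OPTIONS drop-in and restarts CRI-O. A minimal sketch of how the result could be verified on the guest afterwards (paths are the ones shown in the log; the check is illustrative, not part of the run):

	# Confirm the insecure-registry drop-in written above and that CRI-O restarted cleanly.
	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio
	sudo systemctl cat crio --no-pager
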
	I0403 19:16:30.219730   57537 start.go:293] postStartSetup for "pause-942912" (driver="kvm2")
	I0403 19:16:30.219746   57537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0403 19:16:30.219771   57537 main.go:141] libmachine: (pause-942912) Calling .DriverName
	I0403 19:16:30.220146   57537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0403 19:16:30.220182   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHHostname
	I0403 19:16:30.223739   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:30.224259   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:30.224295   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:30.224441   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHPort
	I0403 19:16:30.224658   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:30.224883   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHUsername
	I0403 19:16:30.225078   57537 sshutil.go:53] new ssh client: &{IP:192.168.50.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/pause-942912/id_rsa Username:docker}
	I0403 19:16:30.326075   57537 ssh_runner.go:195] Run: cat /etc/os-release
	I0403 19:16:30.331062   57537 info.go:137] Remote host: Buildroot 2023.02.9
	I0403 19:16:30.331090   57537 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/addons for local assets ...
	I0403 19:16:30.331155   57537 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/files for local assets ...
	I0403 19:16:30.331248   57537 filesync.go:149] local asset: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem -> 215522.pem in /etc/ssl/certs
	I0403 19:16:30.331364   57537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0403 19:16:30.344617   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:16:30.378088   57537 start.go:296] duration metric: took 158.339782ms for postStartSetup
	I0403 19:16:30.378137   57537 fix.go:56] duration metric: took 6.494527073s for fixHost
	I0403 19:16:30.378162   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHHostname
	I0403 19:16:30.381635   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:30.382084   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:30.382117   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:30.382481   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHPort
	I0403 19:16:30.382710   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:30.382932   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:30.383086   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHUsername
	I0403 19:16:30.383334   57537 main.go:141] libmachine: Using SSH client type: native
	I0403 19:16:30.383527   57537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.237 22 <nil> <nil>}
	I0403 19:16:30.383536   57537 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0403 19:16:30.500955   57537 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743707790.490649077
	
	I0403 19:16:30.500984   57537 fix.go:216] guest clock: 1743707790.490649077
	I0403 19:16:30.500995   57537 fix.go:229] Guest: 2025-04-03 19:16:30.490649077 +0000 UTC Remote: 2025-04-03 19:16:30.37814341 +0000 UTC m=+29.114990924 (delta=112.505667ms)
	I0403 19:16:30.501020   57537 fix.go:200] guest clock delta is within tolerance: 112.505667ms
	I0403 19:16:30.501027   57537 start.go:83] releasing machines lock for "pause-942912", held for 6.617446539s
	I0403 19:16:30.501060   57537 main.go:141] libmachine: (pause-942912) Calling .DriverName
	I0403 19:16:30.501398   57537 main.go:141] libmachine: (pause-942912) Calling .GetIP
	I0403 19:16:30.505420   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:30.505808   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:30.505830   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:30.506038   57537 main.go:141] libmachine: (pause-942912) Calling .DriverName
	I0403 19:16:30.506634   57537 main.go:141] libmachine: (pause-942912) Calling .DriverName
	I0403 19:16:30.506815   57537 main.go:141] libmachine: (pause-942912) Calling .DriverName
	I0403 19:16:30.506922   57537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0403 19:16:30.506992   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHHostname
	I0403 19:16:30.507072   57537 ssh_runner.go:195] Run: cat /version.json
	I0403 19:16:30.507100   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHHostname
	I0403 19:16:30.512019   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:30.512138   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:30.512562   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:30.512601   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:30.512723   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:30.512792   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:30.512941   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHPort
	I0403 19:16:30.513155   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:30.513161   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHPort
	I0403 19:16:30.513403   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHKeyPath
	I0403 19:16:30.513421   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHUsername
	I0403 19:16:30.513621   57537 main.go:141] libmachine: (pause-942912) Calling .GetSSHUsername
	I0403 19:16:30.513614   57537 sshutil.go:53] new ssh client: &{IP:192.168.50.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/pause-942912/id_rsa Username:docker}
	I0403 19:16:30.513778   57537 sshutil.go:53] new ssh client: &{IP:192.168.50.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/pause-942912/id_rsa Username:docker}
	I0403 19:16:30.632682   57537 ssh_runner.go:195] Run: systemctl --version
	I0403 19:16:30.641228   57537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0403 19:16:30.804725   57537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0403 19:16:30.813709   57537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0403 19:16:30.813781   57537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0403 19:16:30.827552   57537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0403 19:16:30.827592   57537 start.go:495] detecting cgroup driver to use...
	I0403 19:16:30.827674   57537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0403 19:16:30.850379   57537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0403 19:16:30.877440   57537 docker.go:217] disabling cri-docker service (if available) ...
	I0403 19:16:30.877512   57537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0403 19:16:30.899956   57537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0403 19:16:30.919892   57537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0403 19:16:31.114242   57537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0403 19:16:31.267722   57537 docker.go:233] disabling docker service ...
	I0403 19:16:31.267899   57537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0403 19:16:31.294031   57537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0403 19:16:31.316234   57537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0403 19:16:31.473490   57537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0403 19:16:31.618339   57537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0403 19:16:31.633975   57537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0403 19:16:31.652405   57537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0403 19:16:31.652480   57537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:16:31.662652   57537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0403 19:16:31.662738   57537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:16:31.672744   57537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:16:31.683172   57537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:16:31.694216   57537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0403 19:16:31.709098   57537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:16:31.726585   57537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:16:31.740136   57537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:16:31.754137   57537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0403 19:16:31.767373   57537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0403 19:16:31.780624   57537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:16:31.932626   57537 ssh_runner.go:195] Run: sudo systemctl restart crio
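
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, and the unprivileged-port sysctl) before the daemon-reload and restart. A hedged sketch of how the touched keys could be inspected on the guest, with the expected values taken only from the substitutions shown in the log:

	# Show the keys rewritten above; expected values come from the sed substitutions in the log.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	sudo crictl info >/dev/null && echo "CRI-O socket answering again"
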
	I0403 19:16:32.175008   57537 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0403 19:16:32.175069   57537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0403 19:16:32.181283   57537 start.go:563] Will wait 60s for crictl version
	I0403 19:16:32.181339   57537 ssh_runner.go:195] Run: which crictl
	I0403 19:16:32.185140   57537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0403 19:16:32.220531   57537 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0403 19:16:32.220611   57537 ssh_runner.go:195] Run: crio --version
	I0403 19:16:32.256482   57537 ssh_runner.go:195] Run: crio --version
	I0403 19:16:32.300555   57537 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0403 19:16:32.301744   57537 main.go:141] libmachine: (pause-942912) Calling .GetIP
	I0403 19:16:32.747135   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:32.747625   57537 main.go:141] libmachine: (pause-942912) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:4e:b0", ip: ""} in network mk-pause-942912: {Iface:virbr2 ExpiryTime:2025-04-03 20:15:21 +0000 UTC Type:0 Mac:52:54:00:c7:4e:b0 Iaid: IPaddr:192.168.50.237 Prefix:24 Hostname:pause-942912 Clientid:01:52:54:00:c7:4e:b0}
	I0403 19:16:32.747654   57537 main.go:141] libmachine: (pause-942912) DBG | domain pause-942912 has defined IP address 192.168.50.237 and MAC address 52:54:00:c7:4e:b0 in network mk-pause-942912
	I0403 19:16:32.747844   57537 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0403 19:16:32.769360   57537 kubeadm.go:883] updating cluster {Name:pause-942912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-942912 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.237 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secu
rity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0403 19:16:32.769551   57537 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 19:16:32.769612   57537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:16:32.949399   57537 crio.go:514] all images are preloaded for cri-o runtime.
	I0403 19:16:32.949432   57537 crio.go:433] Images already preloaded, skipping extraction
	I0403 19:16:32.949497   57537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:16:33.105106   57537 crio.go:514] all images are preloaded for cri-o runtime.
	I0403 19:16:33.105144   57537 cache_images.go:84] Images are preloaded, skipping loading
	I0403 19:16:33.105155   57537 kubeadm.go:934] updating node { 192.168.50.237 8443 v1.32.2 crio true true} ...
	I0403 19:16:33.105287   57537 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-942912 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-942912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
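
The kubelet unit shown above pins --hostname-override and --node-ip to the VM's DHCP lease address (192.168.50.237). Purely as an illustration, the flags actually in effect on the guest could be confirmed with the same systemctl queries that appear later in the post-mortem Audit table:

	# Inspect the kubelet unit plus drop-ins, then check that the service picked them up.
	sudo systemctl cat kubelet --no-pager
	sudo systemctl status kubelet --no-pager | head -n 20
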
	I0403 19:16:33.105368   57537 ssh_runner.go:195] Run: crio config
	I0403 19:16:33.231634   57537 cni.go:84] Creating CNI manager for ""
	I0403 19:16:33.231662   57537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:16:33.231677   57537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0403 19:16:33.231705   57537 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.237 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-942912 NodeName:pause-942912 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0403 19:16:33.231859   57537 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-942912"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.237"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.237"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
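
The kubeadm/kubelet/kube-proxy configuration rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As an illustrative sanity check, and assuming the bundled kubeadm is recent enough to ship the config validate subcommand, the rendered file could be checked on the guest like this:

	# Validate the rendered kubeadm config with the bundled kubeadm binary (illustrative).
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
	# Or just confirm the fields derived from the cluster config in the log:
	sudo grep -E 'advertiseAddress|controlPlaneEndpoint|podSubnet|serviceSubnet' \
	  /var/tmp/minikube/kubeadm.yaml.new
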
	
	I0403 19:16:33.231950   57537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0403 19:16:33.262322   57537 binaries.go:44] Found k8s binaries, skipping transfer
	I0403 19:16:33.262384   57537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0403 19:16:33.324509   57537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0403 19:16:33.407856   57537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0403 19:16:33.509250   57537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0403 19:16:33.532672   57537 ssh_runner.go:195] Run: grep 192.168.50.237	control-plane.minikube.internal$ /etc/hosts
	I0403 19:16:33.539682   57537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:16:33.750324   57537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:16:33.776109   57537 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912 for IP: 192.168.50.237
	I0403 19:16:33.776130   57537 certs.go:194] generating shared ca certs ...
	I0403 19:16:33.776148   57537 certs.go:226] acquiring lock for ca certs: {Name:mk34f09724355b828d31eb1ee97771fd3b4645c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:16:33.776322   57537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key
	I0403 19:16:33.776373   57537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key
	I0403 19:16:33.776384   57537 certs.go:256] generating profile certs ...
	I0403 19:16:33.776484   57537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/client.key
	I0403 19:16:33.776560   57537 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/apiserver.key.08fd661b
	I0403 19:16:33.776616   57537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/proxy-client.key
	I0403 19:16:33.776763   57537 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem (1338 bytes)
	W0403 19:16:33.776803   57537 certs.go:480] ignoring /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552_empty.pem, impossibly tiny 0 bytes
	I0403 19:16:33.776823   57537 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem (1675 bytes)
	I0403 19:16:33.776860   57537 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem (1082 bytes)
	I0403 19:16:33.776903   57537 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem (1123 bytes)
	I0403 19:16:33.776943   57537 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem (1675 bytes)
	I0403 19:16:33.776995   57537 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:16:33.777823   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0403 19:16:33.810635   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0403 19:16:33.876229   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0403 19:16:33.907966   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0403 19:16:33.936578   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0403 19:16:33.966651   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0403 19:16:34.002291   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0403 19:16:34.046244   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0403 19:16:34.086073   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /usr/share/ca-certificates/215522.pem (1708 bytes)
	I0403 19:16:34.127444   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0403 19:16:34.155757   57537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem --> /usr/share/ca-certificates/21552.pem (1338 bytes)
	I0403 19:16:34.202410   57537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0403 19:16:34.223467   57537 ssh_runner.go:195] Run: openssl version
	I0403 19:16:34.233553   57537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215522.pem && ln -fs /usr/share/ca-certificates/215522.pem /etc/ssl/certs/215522.pem"
	I0403 19:16:34.248022   57537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215522.pem
	I0403 19:16:34.253589   57537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  3 18:20 /usr/share/ca-certificates/215522.pem
	I0403 19:16:34.253642   57537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215522.pem
	I0403 19:16:34.260289   57537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215522.pem /etc/ssl/certs/3ec20f2e.0"
	I0403 19:16:34.276489   57537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0403 19:16:34.296025   57537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:16:34.305115   57537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  3 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:16:34.305197   57537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:16:34.314606   57537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0403 19:16:34.327892   57537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21552.pem && ln -fs /usr/share/ca-certificates/21552.pem /etc/ssl/certs/21552.pem"
	I0403 19:16:34.344135   57537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21552.pem
	I0403 19:16:34.349939   57537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  3 18:20 /usr/share/ca-certificates/21552.pem
	I0403 19:16:34.349998   57537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21552.pem
	I0403 19:16:34.356707   57537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21552.pem /etc/ssl/certs/51391683.0"
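
The openssl/ln pairs above implement the standard OpenSSL subject-hash lookup scheme: each CA under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <subject-hash>.0 so verification tools can locate it. A minimal sketch of that convention for a single file, using the minikubeCA.pem name from the log:

	# Link a CA certificate under its OpenSSL subject hash so verify tools can find it.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # resolves to b5213941.0 in the log above
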
	I0403 19:16:34.371645   57537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0403 19:16:34.378529   57537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0403 19:16:34.388872   57537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0403 19:16:34.397856   57537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0403 19:16:34.406263   57537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0403 19:16:34.412760   57537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0403 19:16:34.421588   57537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
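
Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, which is presumably how the start path decides whether the existing control-plane certs can be reused as-is. The same check written as a small loop over the cert paths seen in the log (illustrative only):

	# Flag any control-plane certificate that expires within 24h (86400s).
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    || echo "expiring soon: ${c}.crt"
	done
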
	I0403 19:16:34.431122   57537 kubeadm.go:392] StartCluster: {Name:pause-942912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-942912 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.237 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securit
y-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:16:34.431319   57537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0403 19:16:34.431407   57537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 19:16:34.522470   57537 cri.go:89] found id: "c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224"
	I0403 19:16:34.522494   57537 cri.go:89] found id: "5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e"
	I0403 19:16:34.522500   57537 cri.go:89] found id: "aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39"
	I0403 19:16:34.522505   57537 cri.go:89] found id: "9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf"
	I0403 19:16:34.522508   57537 cri.go:89] found id: "8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6"
	I0403 19:16:34.522513   57537 cri.go:89] found id: "dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319"
	I0403 19:16:34.522538   57537 cri.go:89] found id: "89e0c1f9d4f93269b7581f6f52a6ad640ba79ac6b1d1b1a4ec777f07fdb94d5a"
	I0403 19:16:34.522544   57537 cri.go:89] found id: "9390256c03439a09e5a45a30b93d2ffce84f5e0016968409817e5b0e07b02a30"
	I0403 19:16:34.522549   57537 cri.go:89] found id: "626fd3e77f6360784170dcaf5fb9b1a5c9c32935762af290e27d7b0aee963dd8"
	I0403 19:16:34.522557   57537 cri.go:89] found id: "965293767f23dfe005e6968844d61a205318896c37e8f8fc68e1253c9a856d9c"
	I0403 19:16:34.522568   57537 cri.go:89] found id: "0e435520f8914b4b8e407260a0b8e35c498b61e00fd86c3464c19a26d6e7d876"
	I0403 19:16:34.522572   57537 cri.go:89] found id: "d35c5225e443ca824d6cd3421f62ec248691e840177879ad49785c154e42eadc"
	I0403 19:16:34.522578   57537 cri.go:89] found id: ""
	I0403 19:16:34.522627   57537 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-942912 -n pause-942912
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-942912 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-942912 logs -n 25: (4.043675988s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-999005 sudo find    | kubenet-999005            | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /etc/crio -type f -exec sh -c  |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                           |         |         |                     |                     |
	| ssh     | -p kubenet-999005 sudo crio    | kubenet-999005            | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | config                         |                           |         |         |                     |                     |
	| delete  | -p kubenet-999005              | kubenet-999005            | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC | 03 Apr 25 19:17 UTC |
	| start   | -p false-999005 --memory=2048  | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --cni=false --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo cat       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /etc/nsswitch.conf             |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo cat       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /etc/hosts                     |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo cat       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /etc/resolv.conf               |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo crictl    | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | pods                           |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo crictl ps | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | --all                          |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo find      | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /etc/cni -type f -exec sh -c   |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo ip a s    | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	| ssh     | -p false-999005 sudo ip r s    | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	| ssh     | -p false-999005 sudo           | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | iptables-save                  |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo iptables  | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | -t nat -L -n -v                |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo systemctl | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | status kubelet --all --full    |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo systemctl | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | cat kubelet --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo           | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | journalctl -xeu kubelet --all  |                           |         |         |                     |                     |
	|         | --full --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo cat       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf   |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo cat       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /var/lib/kubelet/config.yaml   |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo systemctl | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | status docker --all --full     |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-523797   | kubernetes-upgrade-523797 | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	| ssh     | -p false-999005 sudo systemctl | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | cat docker --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo cat       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | /etc/docker/daemon.json        |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo docker    | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | system info                    |                           |         |         |                     |                     |
	| ssh     | -p false-999005 sudo systemctl | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |                     |
	|         | status cri-docker --all --full |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
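
The Audit rows above each record a profile-scoped `minikube ssh` diagnostic. As a rough sketch of how those rows translate into invocations (the binary path and profile name are taken from the rows themselves; the quoting of the remote command is assumed):

	out/minikube-linux-amd64 ssh -p false-999005 "sudo iptables-save"
	out/minikube-linux-amd64 ssh -p false-999005 "sudo systemctl status kubelet --all --full --no-pager"
	out/minikube-linux-amd64 ssh -p false-999005 "sudo journalctl -xeu kubelet --all --full --no-pager"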
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 19:17:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 19:17:18.345141   59195 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:17:18.345296   59195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:17:18.345308   59195 out.go:358] Setting ErrFile to fd 2...
	I0403 19:17:18.345314   59195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:17:18.345600   59195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:17:18.346427   59195 out.go:352] Setting JSON to false
	I0403 19:17:18.347756   59195 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7183,"bootTime":1743700655,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:17:18.347844   59195 start.go:139] virtualization: kvm guest
	I0403 19:17:18.349797   59195 out.go:177] * [false-999005] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:17:18.351086   59195 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:17:18.351083   59195 notify.go:220] Checking for updates...
	I0403 19:17:18.352178   59195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:17:18.353275   59195 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:17:18.354510   59195 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:17:18.355598   59195 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:17:18.357851   59195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:17:18.359581   59195 config.go:182] Loaded profile config "kubernetes-upgrade-523797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:17:18.359714   59195 config.go:182] Loaded profile config "pause-942912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:17:18.359806   59195 config.go:182] Loaded profile config "stopped-upgrade-413283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0403 19:17:18.359894   59195 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:17:18.395738   59195 out.go:177] * Using the kvm2 driver based on user configuration
	I0403 19:17:18.396917   59195 start.go:297] selected driver: kvm2
	I0403 19:17:18.396932   59195 start.go:901] validating driver "kvm2" against <nil>
	I0403 19:17:18.396944   59195 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:17:18.398641   59195 out.go:201] 
	W0403 19:17:18.399730   59195 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0403 19:17:18.400709   59195 out.go:201] 
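
The MK_USAGE exit above is the guard for a crio-based profile started without a CNI, which is presumably what this "false" network-plugin profile does. A start that would satisfy the check has to name a CNI explicitly; a minimal sketch, assuming the kvm2 driver from this log and the bridge CNI (the CNI choice is an assumption, not something this run used):

	out/minikube-linux-amd64 start -p false-999005 --driver=kvm2 --container-runtime=crio --cni=bridge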
	I0403 19:17:19.561320   54806 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0403 19:17:19.561478   54806 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0403 19:17:19.562998   54806 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0403 19:17:19.563074   54806 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 19:17:19.563169   54806 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 19:17:19.563280   54806 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 19:17:19.563427   54806 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0403 19:17:19.563533   54806 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 19:17:19.565159   54806 out.go:235]   - Generating certificates and keys ...
	I0403 19:17:19.565266   54806 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 19:17:19.565376   54806 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 19:17:19.565503   54806 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0403 19:17:19.565584   54806 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0403 19:17:19.565683   54806 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0403 19:17:19.565753   54806 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0403 19:17:19.565839   54806 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0403 19:17:19.565918   54806 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0403 19:17:19.566039   54806 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0403 19:17:19.566171   54806 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0403 19:17:19.566242   54806 kubeadm.go:310] [certs] Using the existing "sa" key
	I0403 19:17:19.566325   54806 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 19:17:19.566400   54806 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 19:17:19.566475   54806 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 19:17:19.566537   54806 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 19:17:19.566581   54806 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 19:17:19.566724   54806 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 19:17:19.566846   54806 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 19:17:19.566917   54806 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 19:17:19.567030   54806 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 19:17:19.569083   54806 out.go:235]   - Booting up control plane ...
	I0403 19:17:19.569224   54806 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 19:17:19.569358   54806 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 19:17:19.569447   54806 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 19:17:19.569547   54806 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 19:17:19.569796   54806 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0403 19:17:19.569855   54806 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0403 19:17:19.569965   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:17:19.570271   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:17:19.570361   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:17:19.570602   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:17:19.570686   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:17:19.570985   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:17:19.571082   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:17:19.571339   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:17:19.571434   54806 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:17:19.571690   54806 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:17:19.571702   54806 kubeadm.go:310] 
	I0403 19:17:19.571751   54806 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0403 19:17:19.571809   54806 kubeadm.go:310] 		timed out waiting for the condition
	I0403 19:17:19.571819   54806 kubeadm.go:310] 
	I0403 19:17:19.571864   54806 kubeadm.go:310] 	This error is likely caused by:
	I0403 19:17:19.571912   54806 kubeadm.go:310] 		- The kubelet is not running
	I0403 19:17:19.572058   54806 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0403 19:17:19.572068   54806 kubeadm.go:310] 
	I0403 19:17:19.572183   54806 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0403 19:17:19.572229   54806 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0403 19:17:19.572272   54806 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0403 19:17:19.572281   54806 kubeadm.go:310] 
	I0403 19:17:19.572413   54806 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0403 19:17:19.572522   54806 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0403 19:17:19.572532   54806 kubeadm.go:310] 
	I0403 19:17:19.572668   54806 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0403 19:17:19.572744   54806 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0403 19:17:19.572808   54806 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0403 19:17:19.572870   54806 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0403 19:17:19.572927   54806 kubeadm.go:394] duration metric: took 3m55.49680016s to StartCluster
	I0403 19:17:19.572982   54806 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:17:19.573031   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:17:19.573084   54806 kubeadm.go:310] 
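
The crictl commands suggested in the kubeadm output above can be run from the host by wrapping them in `minikube ssh`; a sketch, assuming the failing v1.20.0 cluster here belongs to the kubernetes-upgrade-523797 profile (the only v1.20.0 profile loaded earlier in this log), with the runtime endpoint copied verbatim from the suggestion:

	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-523797 \
	  "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-523797 \
	  "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"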
	I0403 19:17:19.628411   54806 cri.go:89] found id: ""
	I0403 19:17:19.628448   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.628460   54806 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:17:19.628469   54806 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:17:19.628556   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:17:19.670445   54806 cri.go:89] found id: ""
	I0403 19:17:19.670467   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.670475   54806 logs.go:284] No container was found matching "etcd"
	I0403 19:17:19.670481   54806 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:17:19.670536   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:17:19.718817   54806 cri.go:89] found id: ""
	I0403 19:17:19.718864   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.718876   54806 logs.go:284] No container was found matching "coredns"
	I0403 19:17:19.718885   54806 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:17:19.718946   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:17:19.769896   54806 cri.go:89] found id: ""
	I0403 19:17:19.769924   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.769945   54806 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:17:19.769953   54806 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:17:19.770011   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:17:19.818772   54806 cri.go:89] found id: ""
	I0403 19:17:19.818801   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.818812   54806 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:17:19.818839   54806 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:17:19.818904   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:17:19.869079   54806 cri.go:89] found id: ""
	I0403 19:17:19.869106   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.869117   54806 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:17:19.869128   54806 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:17:19.869205   54806 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:17:19.911846   54806 cri.go:89] found id: ""
	I0403 19:17:19.911875   54806 logs.go:282] 0 containers: []
	W0403 19:17:19.911887   54806 logs.go:284] No container was found matching "kindnet"
	I0403 19:17:19.911897   54806 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:17:19.911910   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:17:20.075082   54806 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:17:20.075112   54806 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:17:20.075127   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:17:20.234441   54806 logs.go:123] Gathering logs for container status ...
	I0403 19:17:20.234484   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:17:20.287481   54806 logs.go:123] Gathering logs for kubelet ...
	I0403 19:17:20.287510   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:17:20.351628   54806 logs.go:123] Gathering logs for dmesg ...
	I0403 19:17:20.351705   54806 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0403 19:17:20.369787   54806 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
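
The stderr warning above points at the kubelet service not being enabled on the node. A generic troubleshooting sketch along the lines the kubeadm output itself suggests (profile name assumed as before; this is illustration only, not the remediation the test harness applies):

	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-523797 "sudo systemctl enable --now kubelet"
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-523797 "sudo systemctl status kubelet --full --no-pager"
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-523797 "sudo journalctl -xeu kubelet --no-pager | tail -n 100"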
	W0403 19:17:20.369863   54806 out.go:270] * 
	W0403 19:17:20.369930   54806 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:17:20.369948   54806 out.go:270] * 
	W0403 19:17:20.371150   54806 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0403 19:17:20.374570   54806 out.go:201] 
	W0403 19:17:20.375914   54806 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:17:20.376194   54806 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0403 19:17:20.376230   54806 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
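
The suggestion above amounts to retrying the same start with an extra kubelet setting. A sketch, with --extra-config taken verbatim from the suggestion and the remaining flags inferred from the kubernetes-upgrade-523797 profile config loaded earlier in this log (Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-523797 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd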
	I0403 19:17:17.154598   57537 addons.go:514] duration metric: took 2.966246ms for enable addons: enabled=[]
	I0403 19:17:17.154738   57537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:17:17.378189   57537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:17:17.401276   57537 node_ready.go:35] waiting up to 6m0s for node "pause-942912" to be "Ready" ...
	I0403 19:17:17.405201   57537 node_ready.go:49] node "pause-942912" has status "Ready":"True"
	I0403 19:17:17.405227   57537 node_ready.go:38] duration metric: took 3.919411ms for node "pause-942912" to be "Ready" ...
	I0403 19:17:17.405238   57537 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:17:17.409158   57537 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-bjbjm" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:17.509322   57537 pod_ready.go:93] pod "coredns-668d6bf9bc-bjbjm" in "kube-system" namespace has status "Ready":"True"
	I0403 19:17:17.509350   57537 pod_ready.go:82] duration metric: took 100.16519ms for pod "coredns-668d6bf9bc-bjbjm" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:17.509364   57537 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-942912" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:17.908473   57537 pod_ready.go:93] pod "etcd-pause-942912" in "kube-system" namespace has status "Ready":"True"
	I0403 19:17:17.908500   57537 pod_ready.go:82] duration metric: took 399.128286ms for pod "etcd-pause-942912" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:17.908513   57537 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-942912" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:18.308424   57537 pod_ready.go:93] pod "kube-apiserver-pause-942912" in "kube-system" namespace has status "Ready":"True"
	I0403 19:17:18.308451   57537 pod_ready.go:82] duration metric: took 399.929376ms for pod "kube-apiserver-pause-942912" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:18.308464   57537 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-942912" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:18.719281   57537 pod_ready.go:93] pod "kube-controller-manager-pause-942912" in "kube-system" namespace has status "Ready":"True"
	I0403 19:17:18.719315   57537 pod_ready.go:82] duration metric: took 410.840032ms for pod "kube-controller-manager-pause-942912" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:18.719335   57537 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mqhzs" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:19.111056   57537 pod_ready.go:93] pod "kube-proxy-mqhzs" in "kube-system" namespace has status "Ready":"True"
	I0403 19:17:19.111085   57537 pod_ready.go:82] duration metric: took 391.740984ms for pod "kube-proxy-mqhzs" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:19.111099   57537 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-942912" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:19.508554   57537 pod_ready.go:93] pod "kube-scheduler-pause-942912" in "kube-system" namespace has status "Ready":"True"
	I0403 19:17:19.508575   57537 pod_ready.go:82] duration metric: took 397.468613ms for pod "kube-scheduler-pause-942912" in "kube-system" namespace to be "Ready" ...
	I0403 19:17:19.508587   57537 pod_ready.go:39] duration metric: took 2.103333535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:17:19.508607   57537 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:17:19.508661   57537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:17:19.524149   57537 api_server.go:72] duration metric: took 2.372551436s to wait for apiserver process to appear ...
	I0403 19:17:19.524187   57537 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:17:19.524211   57537 api_server.go:253] Checking apiserver healthz at https://192.168.50.237:8443/healthz ...
	I0403 19:17:19.528955   57537 api_server.go:279] https://192.168.50.237:8443/healthz returned 200:
	ok
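
The healthz probe logged here can be reproduced by hand. A sketch, assuming the kubectl context for this profile is named pause-942912 (minikube's default naming) and that the default system:public-info-viewer binding still exposes /healthz to unauthenticated callers:

	curl -k https://192.168.50.237:8443/healthz
	kubectl --context pause-942912 get --raw /healthz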
	I0403 19:17:19.529919   57537 api_server.go:141] control plane version: v1.32.2
	I0403 19:17:19.529945   57537 api_server.go:131] duration metric: took 5.747235ms to wait for apiserver health ...
	I0403 19:17:19.529955   57537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:17:19.709278   57537 system_pods.go:59] 6 kube-system pods found
	I0403 19:17:19.709315   57537 system_pods.go:61] "coredns-668d6bf9bc-bjbjm" [5451a9c0-aaee-4cac-903b-11fc6b36dce0] Running
	I0403 19:17:19.709323   57537 system_pods.go:61] "etcd-pause-942912" [1d5acfa0-4890-46b5-ad36-960d89021ac0] Running
	I0403 19:17:19.709329   57537 system_pods.go:61] "kube-apiserver-pause-942912" [53038d54-8f92-451e-9765-37206f3fca0f] Running
	I0403 19:17:19.709334   57537 system_pods.go:61] "kube-controller-manager-pause-942912" [96e5d014-4a75-4895-9aec-84fb6cf6c7da] Running
	I0403 19:17:19.709339   57537 system_pods.go:61] "kube-proxy-mqhzs" [b2538b1a-3d84-45ad-9f64-907d33b4a586] Running
	I0403 19:17:19.709344   57537 system_pods.go:61] "kube-scheduler-pause-942912" [d117ce1a-6a87-49ab-becd-bcb1aefecbf8] Running
	I0403 19:17:19.709352   57537 system_pods.go:74] duration metric: took 179.389014ms to wait for pod list to return data ...
	I0403 19:17:19.709398   57537 default_sa.go:34] waiting for default service account to be created ...
	I0403 19:17:19.909772   57537 default_sa.go:45] found service account: "default"
	I0403 19:17:19.909799   57537 default_sa.go:55] duration metric: took 200.393164ms for default service account to be created ...
	I0403 19:17:19.909812   57537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 19:17:20.109227   57537 system_pods.go:86] 6 kube-system pods found
	I0403 19:17:20.109258   57537 system_pods.go:89] "coredns-668d6bf9bc-bjbjm" [5451a9c0-aaee-4cac-903b-11fc6b36dce0] Running
	I0403 19:17:20.109266   57537 system_pods.go:89] "etcd-pause-942912" [1d5acfa0-4890-46b5-ad36-960d89021ac0] Running
	I0403 19:17:20.109278   57537 system_pods.go:89] "kube-apiserver-pause-942912" [53038d54-8f92-451e-9765-37206f3fca0f] Running
	I0403 19:17:20.109283   57537 system_pods.go:89] "kube-controller-manager-pause-942912" [96e5d014-4a75-4895-9aec-84fb6cf6c7da] Running
	I0403 19:17:20.109288   57537 system_pods.go:89] "kube-proxy-mqhzs" [b2538b1a-3d84-45ad-9f64-907d33b4a586] Running
	I0403 19:17:20.109296   57537 system_pods.go:89] "kube-scheduler-pause-942912" [d117ce1a-6a87-49ab-becd-bcb1aefecbf8] Running
	I0403 19:17:20.109306   57537 system_pods.go:126] duration metric: took 199.486668ms to wait for k8s-apps to be running ...
	I0403 19:17:20.109318   57537 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 19:17:20.109362   57537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:17:20.126093   57537 system_svc.go:56] duration metric: took 16.76684ms WaitForService to wait for kubelet
	I0403 19:17:20.126126   57537 kubeadm.go:582] duration metric: took 2.974533127s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:17:20.126149   57537 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:17:20.309999   57537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:17:20.310027   57537 node_conditions.go:123] node cpu capacity is 2
	I0403 19:17:20.310043   57537 node_conditions.go:105] duration metric: took 183.888551ms to run NodePressure ...
	I0403 19:17:20.310057   57537 start.go:241] waiting for startup goroutines ...
	I0403 19:17:20.310065   57537 start.go:246] waiting for cluster config update ...
	I0403 19:17:20.310075   57537 start.go:255] writing updated cluster config ...
	I0403 19:17:20.310436   57537 ssh_runner.go:195] Run: rm -f paused
	I0403 19:17:20.376860   57537 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 19:17:20.380361   54806 out.go:201] 
	I0403 19:17:20.381269   57537 out.go:177] * Done! kubectl is now configured to use "pause-942912" cluster and "default" namespace by default
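
Once the profile reports Done!, the cluster can be exercised directly through the context minikube wrote to the kubeconfig; a minimal sketch (context and profile name taken from the message above):

	kubectl config current-context              # expected: pause-942912
	kubectl --context pause-942912 get pods -A
	out/minikube-linux-amd64 -p pause-942912 status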
	
	
	==> CRI-O <==
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.264136257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707841264099680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a65b519-b9a4-42e4-9578-854b4dba8275 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.264978710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4694d7e6-40aa-4d5f-986d-8ce4ad179610 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.265110505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4694d7e6-40aa-4d5f-986d-8ce4ad179610 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.265463504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de5d6b0d898e42a7cb1ffba73cf384aaa0cf3314cf48b898c5a78b6cf0ecee4e,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707820854533841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180b7b71ed77420e893b05c3668fc4fcb367132ac798ea84a2e93965c7783c1c,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707820862339872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e58d7ea0cc859964c7efdd5ab9138cec85489ca8099525bc20cb04635ac0ee,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707817046743510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce8b17a7294c3e97f6438e5ea654ffba48ee58a9c1f256bed8ed8e8485e4149,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707817049895006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
5cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60f618d7fe4b2e9715541c7aaebf7ccbd3c27bcfdb35db86a4e8e711780e9,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707817018724083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2909a93c6d79b7316517f6b5b20adf00529bb1c64ee5c47a3d711ad20edcf1,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707814556062393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707793624674760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707792850754423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707792777581812,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707792863367105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707792730595453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707792613461701,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 785cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4694d7e6-40aa-4d5f-986d-8ce4ad179610 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.331480631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5502e77-55fa-452e-9c2b-f142c7a89231 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.331623406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5502e77-55fa-452e-9c2b-f142c7a89231 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.333370589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=546e7088-daf3-467a-8c3b-6bb84d006c30 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.334278328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707841334235449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=546e7088-daf3-467a-8c3b-6bb84d006c30 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.338468159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b21cfeb0-b5d8-4b96-bbe3-5cdfb3a9e437 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.338535127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b21cfeb0-b5d8-4b96-bbe3-5cdfb3a9e437 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.338881034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de5d6b0d898e42a7cb1ffba73cf384aaa0cf3314cf48b898c5a78b6cf0ecee4e,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707820854533841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180b7b71ed77420e893b05c3668fc4fcb367132ac798ea84a2e93965c7783c1c,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707820862339872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e58d7ea0cc859964c7efdd5ab9138cec85489ca8099525bc20cb04635ac0ee,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707817046743510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce8b17a7294c3e97f6438e5ea654ffba48ee58a9c1f256bed8ed8e8485e4149,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707817049895006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
5cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60f618d7fe4b2e9715541c7aaebf7ccbd3c27bcfdb35db86a4e8e711780e9,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707817018724083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2909a93c6d79b7316517f6b5b20adf00529bb1c64ee5c47a3d711ad20edcf1,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707814556062393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707793624674760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707792850754423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707792777581812,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707792863367105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707792730595453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707792613461701,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 785cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b21cfeb0-b5d8-4b96-bbe3-5cdfb3a9e437 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.395714623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c7239b8-eefd-45f4-a9b6-264575c1b1d2 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.395828924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c7239b8-eefd-45f4-a9b6-264575c1b1d2 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.397991068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8bc1c87-529c-456f-9458-f76b00762a9c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.398614772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707841398580437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8bc1c87-529c-456f-9458-f76b00762a9c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.399930550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7379ea0c-eb5f-4e29-a6a9-2c6f1263d177 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.400143733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7379ea0c-eb5f-4e29-a6a9-2c6f1263d177 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.400736029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de5d6b0d898e42a7cb1ffba73cf384aaa0cf3314cf48b898c5a78b6cf0ecee4e,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707820854533841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180b7b71ed77420e893b05c3668fc4fcb367132ac798ea84a2e93965c7783c1c,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707820862339872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e58d7ea0cc859964c7efdd5ab9138cec85489ca8099525bc20cb04635ac0ee,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707817046743510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce8b17a7294c3e97f6438e5ea654ffba48ee58a9c1f256bed8ed8e8485e4149,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707817049895006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
5cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60f618d7fe4b2e9715541c7aaebf7ccbd3c27bcfdb35db86a4e8e711780e9,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707817018724083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2909a93c6d79b7316517f6b5b20adf00529bb1c64ee5c47a3d711ad20edcf1,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707814556062393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707793624674760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707792850754423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707792777581812,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707792863367105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707792730595453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707792613461701,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 785cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7379ea0c-eb5f-4e29-a6a9-2c6f1263d177 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.462536845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=658860c7-dc49-4a92-915b-04318304a9d2 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.462618128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=658860c7-dc49-4a92-915b-04318304a9d2 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.464337988Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76820d19-cfca-4f53-b0d9-dc3ff3ac7b0e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.464839207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707841464804746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76820d19-cfca-4f53-b0d9-dc3ff3ac7b0e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.465707034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29c9117e-9580-429a-b33e-5aabf800fcea name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.465778559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29c9117e-9580-429a-b33e-5aabf800fcea name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:21 pause-942912 crio[2376]: time="2025-04-03 19:17:21.466108620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de5d6b0d898e42a7cb1ffba73cf384aaa0cf3314cf48b898c5a78b6cf0ecee4e,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707820854533841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180b7b71ed77420e893b05c3668fc4fcb367132ac798ea84a2e93965c7783c1c,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707820862339872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e58d7ea0cc859964c7efdd5ab9138cec85489ca8099525bc20cb04635ac0ee,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707817046743510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce8b17a7294c3e97f6438e5ea654ffba48ee58a9c1f256bed8ed8e8485e4149,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707817049895006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
5cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60f618d7fe4b2e9715541c7aaebf7ccbd3c27bcfdb35db86a4e8e711780e9,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707817018724083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2909a93c6d79b7316517f6b5b20adf00529bb1c64ee5c47a3d711ad20edcf1,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707814556062393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707793624674760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707792850754423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707792777581812,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707792863367105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707792730595453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707792613461701,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 785cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29c9117e-9580-429a-b33e-5aabf800fcea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	180b7b71ed774       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   20 seconds ago      Running             coredns                   2                   8dc7fcf379e0e       coredns-668d6bf9bc-bjbjm
	de5d6b0d898e4       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   20 seconds ago      Running             kube-proxy                2                   670fafb8c0ef9       kube-proxy-mqhzs
	dce8b17a7294c       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   24 seconds ago      Running             kube-apiserver            2                   e8ecf26b53beb       kube-apiserver-pause-942912
	20e58d7ea0cc8       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   24 seconds ago      Running             kube-controller-manager   2                   64efbc9b50352       kube-controller-manager-pause-942912
	68e60f618d7fe       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   24 seconds ago      Running             etcd                      2                   4e1eae40b00a6       etcd-pause-942912
	9a2909a93c6d7       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   27 seconds ago      Running             kube-scheduler            2                   82947c2a61b75       kube-scheduler-pause-942912
	c189bb593727f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   47 seconds ago      Exited              coredns                   1                   8dc7fcf379e0e       coredns-668d6bf9bc-bjbjm
	5b57989f55ed3       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   48 seconds ago      Exited              kube-scheduler            1                   82947c2a61b75       kube-scheduler-pause-942912
	aa5533e25d234       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   48 seconds ago      Exited              kube-proxy                1                   670fafb8c0ef9       kube-proxy-mqhzs
	9a003c2044eb8       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   48 seconds ago      Exited              etcd                      1                   4e1eae40b00a6       etcd-pause-942912
	8d56aa312e1b6       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   48 seconds ago      Exited              kube-controller-manager   1                   64efbc9b50352       kube-controller-manager-pause-942912
	dcd3a9ff5b410       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   48 seconds ago      Exited              kube-apiserver            1                   e8ecf26b53beb       kube-apiserver-pause-942912
	
	
	==> coredns [180b7b71ed77420e893b05c3668fc4fcb367132ac798ea84a2e93965c7783c1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41780 - 5213 "HINFO IN 864490175101335737.1691666836178847054. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016178405s
	
	
	==> coredns [c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59436 - 51308 "HINFO IN 2805605650235843080.4615053983371874143. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008941765s
	
	
	==> describe nodes <==
	Name:               pause-942912
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-942912
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053
	                    minikube.k8s.io/name=pause-942912
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_03T19_15_50_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 03 Apr 2025 19:15:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-942912
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 03 Apr 2025 19:17:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 03 Apr 2025 19:17:00 +0000   Thu, 03 Apr 2025 19:15:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 03 Apr 2025 19:17:00 +0000   Thu, 03 Apr 2025 19:15:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 03 Apr 2025 19:17:00 +0000   Thu, 03 Apr 2025 19:15:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 03 Apr 2025 19:17:00 +0000   Thu, 03 Apr 2025 19:15:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.237
	  Hostname:    pause-942912
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb23564742054661a2f7d1256fd8bb69
	  System UUID:                fb235647-4205-4661-a2f7-d1256fd8bb69
	  Boot ID:                    b8c990b7-c200-41ab-ab8d-d7237f26c8d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-bjbjm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     88s
	  kube-system                 etcd-pause-942912                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         95s
	  kube-system                 kube-apiserver-pause-942912             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-pause-942912    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-mqhzs                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-pause-942912             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 86s                  kube-proxy       
	  Normal  Starting                 22s                  kube-proxy       
	  Normal  Starting                 46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node pause-942912 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node pause-942912 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x7 over 101s)  kubelet          Node pause-942912 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    93s                  kubelet          Node pause-942912 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  93s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  93s                  kubelet          Node pause-942912 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     93s                  kubelet          Node pause-942912 status is now: NodeHasSufficientPID
	  Normal  NodeReady                93s                  kubelet          Node pause-942912 status is now: NodeReady
	  Normal  Starting                 93s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           89s                  node-controller  Node pause-942912 event: Registered Node pause-942912 in Controller
	  Normal  RegisteredNode           44s                  node-controller  Node pause-942912 event: Registered Node pause-942912 in Controller
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)    kubelet          Node pause-942912 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)    kubelet          Node pause-942912 status is now: NodeHasSufficientMemory
	  Normal  Starting                 27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)    kubelet          Node pause-942912 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                  node-controller  Node pause-942912 event: Registered Node pause-942912 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.274194] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.063673] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058781] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.206639] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.118284] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.261332] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.197850] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.363369] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.077912] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.556160] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.095711] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.253398] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.115626] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 3 19:16] kauditd_printk_skb: 88 callbacks suppressed
	[ +24.361638] systemd-fstab-generator[2302]: Ignoring "noauto" option for root device
	[  +0.194615] systemd-fstab-generator[2314]: Ignoring "noauto" option for root device
	[  +0.194624] systemd-fstab-generator[2328]: Ignoring "noauto" option for root device
	[  +0.151759] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +0.302919] systemd-fstab-generator[2368]: Ignoring "noauto" option for root device
	[  +1.797621] systemd-fstab-generator[2998]: Ignoring "noauto" option for root device
	[  +3.194952] kauditd_printk_skb: 195 callbacks suppressed
	[ +19.462808] systemd-fstab-generator[3510]: Ignoring "noauto" option for root device
	[Apr 3 19:17] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.154386] systemd-fstab-generator[3892]: Ignoring "noauto" option for root device
	
	
	==> etcd [68e60f618d7fe4b2e9715541c7aaebf7ccbd3c27bcfdb35db86a4e8e711780e9] <==
	{"level":"info","ts":"2025-04-03T19:17:02.650762Z","caller":"traceutil/trace.go:171","msg":"trace[1854960205] linearizableReadLoop","detail":"{readStateIndex:594; appliedIndex:593; }","duration":"265.438811ms","start":"2025-04-03T19:17:02.385304Z","end":"2025-04-03T19:17:02.650743Z","steps":["trace[1854960205] 'read index received'  (duration: 265.375455ms)","trace[1854960205] 'applied index is now lower than readState.Index'  (duration: 62.743µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-03T19:17:02.650864Z","caller":"traceutil/trace.go:171","msg":"trace[142291753] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"422.634762ms","start":"2025-04-03T19:17:02.228222Z","end":"2025-04-03T19:17:02.650857Z","steps":["trace[142291753] 'process raft request'  (duration: 422.390381ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:02.651173Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:17:02.228204Z","time spent":"422.68159ms","remote":"127.0.0.1:43796","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:381 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"warn","ts":"2025-04-03T19:17:02.651432Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.119397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-04-03T19:17:02.651479Z","caller":"traceutil/trace.go:171","msg":"trace[770933996] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pod-garbage-collector; range_end:; response_count:1; response_revision:552; }","duration":"266.190345ms","start":"2025-04-03T19:17:02.385280Z","end":"2025-04-03T19:17:02.651470Z","steps":["trace[770933996] 'agreement among raft nodes before linearized reading'  (duration: 266.100088ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:02.651628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.273035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T19:17:02.651664Z","caller":"traceutil/trace.go:171","msg":"trace[1173148403] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:552; }","duration":"266.319475ms","start":"2025-04-03T19:17:02.385337Z","end":"2025-04-03T19:17:02.651657Z","steps":["trace[1173148403] 'agreement among raft nodes before linearized reading'  (duration: 266.270878ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:02.651861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.399922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T19:17:02.651900Z","caller":"traceutil/trace.go:171","msg":"trace[1989764075] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:552; }","duration":"266.44866ms","start":"2025-04-03T19:17:02.385444Z","end":"2025-04-03T19:17:02.651893Z","steps":["trace[1989764075] 'agreement among raft nodes before linearized reading'  (duration: 266.395789ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:03.213363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.367075ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16150073177975122828 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kube-proxy\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kube-proxy\" value_size:115 >> failure:<>>","response":"size:5"}
	{"level":"info","ts":"2025-04-03T19:17:03.213565Z","caller":"traceutil/trace.go:171","msg":"trace[1631560445] linearizableReadLoop","detail":"{readStateIndex:595; appliedIndex:594; }","duration":"553.380302ms","start":"2025-04-03T19:17:02.660168Z","end":"2025-04-03T19:17:03.213548Z","steps":["trace[1631560445] 'read index received'  (duration: 139.674535ms)","trace[1631560445] 'applied index is now lower than readState.Index'  (duration: 413.704116ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-03T19:17:03.213609Z","caller":"traceutil/trace.go:171","msg":"trace[1618896695] transaction","detail":"{read_only:false; number_of_response:0; response_revision:552; }","duration":"554.04049ms","start":"2025-04-03T19:17:02.659547Z","end":"2025-04-03T19:17:03.213587Z","steps":["trace[1618896695] 'process raft request'  (duration: 140.36257ms)","trace[1618896695] 'compare'  (duration: 413.267103ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-03T19:17:03.213714Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:17:02.659531Z","time spent":"554.144734ms","remote":"127.0.0.1:43552","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kube-proxy\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kube-proxy\" value_size:115 >> failure:<>"}
	{"level":"warn","ts":"2025-04-03T19:17:03.213789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"553.566479ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-03T19:17:03.213839Z","caller":"traceutil/trace.go:171","msg":"trace[1288001019] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:552; }","duration":"553.683115ms","start":"2025-04-03T19:17:02.660147Z","end":"2025-04-03T19:17:03.213830Z","steps":["trace[1288001019] 'agreement among raft nodes before linearized reading'  (duration: 553.493016ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:03.213892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:17:02.660135Z","time spent":"553.746254ms","remote":"127.0.0.1:43552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":218,"request content":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 "}
	{"level":"info","ts":"2025-04-03T19:17:03.342451Z","caller":"traceutil/trace.go:171","msg":"trace[1681559598] transaction","detail":"{read_only:false; number_of_response:0; response_revision:552; }","duration":"117.774713ms","start":"2025-04-03T19:17:03.224649Z","end":"2025-04-03T19:17:03.342423Z","steps":["trace[1681559598] 'process raft request'  (duration: 117.663563ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T19:17:03.564698Z","caller":"traceutil/trace.go:171","msg":"trace[1425395186] linearizableReadLoop","detail":"{readStateIndex:598; appliedIndex:597; }","duration":"135.038272ms","start":"2025-04-03T19:17:03.429643Z","end":"2025-04-03T19:17:03.564681Z","steps":["trace[1425395186] 'read index received'  (duration: 134.931835ms)","trace[1425395186] 'applied index is now lower than readState.Index'  (duration: 105.922µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-03T19:17:03.564834Z","caller":"traceutil/trace.go:171","msg":"trace[1392161603] transaction","detail":"{read_only:false; number_of_response:0; response_revision:552; }","duration":"136.54235ms","start":"2025-04-03T19:17:03.428283Z","end":"2025-04-03T19:17:03.564825Z","steps":["trace[1392161603] 'process raft request'  (duration: 136.337781ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:03.564908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.243625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-04-03T19:17:03.565945Z","caller":"traceutil/trace.go:171","msg":"trace[156784200] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:552; }","duration":"136.312574ms","start":"2025-04-03T19:17:03.429619Z","end":"2025-04-03T19:17:03.565931Z","steps":["trace[156784200] 'agreement among raft nodes before linearized reading'  (duration: 135.234007ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T19:17:04.822997Z","caller":"traceutil/trace.go:171","msg":"trace[1402701582] linearizableReadLoop","detail":"{readStateIndex:600; appliedIndex:599; }","duration":"238.609036ms","start":"2025-04-03T19:17:04.584371Z","end":"2025-04-03T19:17:04.822980Z","steps":["trace[1402701582] 'read index received'  (duration: 154.707485ms)","trace[1402701582] 'applied index is now lower than readState.Index'  (duration: 83.900583ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-03T19:17:04.823233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.855585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-bjbjm\" limit:1 ","response":"range_response_count:1 size:5149"}
	{"level":"info","ts":"2025-04-03T19:17:04.823281Z","caller":"traceutil/trace.go:171","msg":"trace[782465285] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-bjbjm; range_end:; response_count:1; response_revision:554; }","duration":"238.943664ms","start":"2025-04-03T19:17:04.584327Z","end":"2025-04-03T19:17:04.823270Z","steps":["trace[782465285] 'agreement among raft nodes before linearized reading'  (duration: 238.799822ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T19:17:04.823566Z","caller":"traceutil/trace.go:171","msg":"trace[666053067] transaction","detail":"{read_only:false; response_revision:554; number_of_response:1; }","duration":"289.19715ms","start":"2025-04-03T19:17:04.534357Z","end":"2025-04-03T19:17:04.823554Z","steps":["trace[666053067] 'process raft request'  (duration: 204.773168ms)","trace[666053067] 'compare'  (duration: 83.506418ms)"],"step_count":2}
	
	
	==> etcd [9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf] <==
	{"level":"warn","ts":"2025-04-03T19:16:37.188458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:16:36.664445Z","time spent":"524.012659ms","remote":"127.0.0.1:38202","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-04-03T19:16:37.188663Z","caller":"traceutil/trace.go:171","msg":"trace[731657256] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"517.489942ms","start":"2025-04-03T19:16:36.671161Z","end":"2025-04-03T19:16:37.188651Z","steps":["trace[731657256] 'process raft request'  (duration: 516.448738ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:16:37.188729Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:16:36.671141Z","time spent":"517.563776ms","remote":"127.0.0.1:37930","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-942912\" mod_revision:419 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-942912\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-942912\" > >"}
	{"level":"warn","ts":"2025-04-03T19:16:37.732131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.242837ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16150073177969010048 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:602095fd1568257f>","response":"size:41"}
	{"level":"info","ts":"2025-04-03T19:16:37.732384Z","caller":"traceutil/trace.go:171","msg":"trace[1369975838] linearizableReadLoop","detail":"{readStateIndex:449; appliedIndex:447; }","duration":"539.510461ms","start":"2025-04-03T19:16:37.192856Z","end":"2025-04-03T19:16:37.732367Z","steps":["trace[1369975838] 'read index received'  (duration: 196.977998ms)","trace[1369975838] 'applied index is now lower than readState.Index'  (duration: 342.531963ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-03T19:16:37.732470Z","caller":"traceutil/trace.go:171","msg":"trace[1788893950] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"541.11569ms","start":"2025-04-03T19:16:37.191336Z","end":"2025-04-03T19:16:37.732452Z","steps":["trace[1788893950] 'process raft request'  (duration: 540.943124ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:16:37.732553Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:16:37.191273Z","time spent":"541.235892ms","remote":"127.0.0.1:38202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":534,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-942912.1832e500a3da667a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-942912.1832e500a3da667a\" value_size:462 lease:6926701141114234235 >> failure:<>"}
	{"level":"warn","ts":"2025-04-03T19:16:37.732731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"539.864482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-942912\" limit:1 ","response":"range_response_count:1 size:6988"}
	{"level":"info","ts":"2025-04-03T19:16:37.732821Z","caller":"traceutil/trace.go:171","msg":"trace[265576591] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-942912; range_end:; response_count:1; response_revision:425; }","duration":"540.012701ms","start":"2025-04-03T19:16:37.192798Z","end":"2025-04-03T19:16:37.732810Z","steps":["trace[265576591] 'agreement among raft nodes before linearized reading'  (duration: 539.689152ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:16:37.732892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:16:37.192787Z","time spent":"540.091964ms","remote":"127.0.0.1:37850","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":1,"response size":7011,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-942912\" limit:1 "}
	{"level":"warn","ts":"2025-04-03T19:16:37.732965Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.840271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:1 size:442"}
	{"level":"warn","ts":"2025-04-03T19:16:37.733202Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.301861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T19:16:37.733275Z","caller":"traceutil/trace.go:171","msg":"trace[710457659] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:425; }","duration":"255.402308ms","start":"2025-04-03T19:16:37.477865Z","end":"2025-04-03T19:16:37.733267Z","steps":["trace[710457659] 'agreement among raft nodes before linearized reading'  (duration: 255.309613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T19:16:37.733230Z","caller":"traceutil/trace.go:171","msg":"trace[1613584348] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:425; }","duration":"258.129062ms","start":"2025-04-03T19:16:37.475091Z","end":"2025-04-03T19:16:37.733220Z","steps":["trace[1613584348] 'agreement among raft nodes before linearized reading'  (duration: 257.81234ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:16:37.732411Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:16:37.041305Z","time spent":"691.101767ms","remote":"127.0.0.1:37754","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-04-03T19:16:44.673134Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-03T19:16:44.673224Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-942912","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.237:2380"],"advertise-client-urls":["https://192.168.50.237:2379"]}
	{"level":"warn","ts":"2025-04-03T19:16:44.673310Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-03T19:16:44.673463Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-03T19:16:44.715890Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.237:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-03T19:16:44.715964Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.237:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-03T19:16:44.716092Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2850e06a3711e020","current-leader-member-id":"2850e06a3711e020"}
	{"level":"info","ts":"2025-04-03T19:16:44.721693Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.50.237:2380"}
	{"level":"info","ts":"2025-04-03T19:16:44.721881Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.50.237:2380"}
	{"level":"info","ts":"2025-04-03T19:16:44.721915Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-942912","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.237:2380"],"advertise-client-urls":["https://192.168.50.237:2379"]}
	
	
	==> kernel <==
	 19:17:24 up 2 min,  0 users,  load average: 1.06, 0.57, 0.22
	Linux pause-942912 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319] <==
	W0403 19:16:54.150334       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.165753       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.171243       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.175683       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.205463       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.249106       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.300703       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.333426       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.337777       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.372480       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.375933       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.400474       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.426458       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.487224       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.508811       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.516448       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.532742       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.548726       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.672929       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.704580       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.733480       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.749533       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.840931       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.841276       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.971418       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dce8b17a7294c3e97f6438e5ea654ffba48ee58a9c1f256bed8ed8e8485e4149] <==
	I0403 19:17:00.402972       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0403 19:17:00.403058       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0403 19:17:00.411811       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0403 19:17:00.411896       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0403 19:17:00.411996       1 aggregator.go:171] initial CRD sync complete...
	I0403 19:17:00.412083       1 autoregister_controller.go:144] Starting autoregister controller
	I0403 19:17:00.412104       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0403 19:17:00.412110       1 cache.go:39] Caches are synced for autoregister controller
	I0403 19:17:00.454839       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0403 19:17:00.454904       1 policy_source.go:240] refreshing policies
	I0403 19:17:00.491918       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0403 19:17:00.494002       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0403 19:17:00.494440       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0403 19:17:00.494614       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0403 19:17:00.519824       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0403 19:17:00.534694       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0403 19:17:00.590569       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0403 19:17:01.305694       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0403 19:17:01.803548       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0403 19:17:01.847477       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0403 19:17:03.348664       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0403 19:17:03.427454       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0403 19:17:05.051631       1 controller.go:615] quota admission added evaluator for: endpoints
	I0403 19:17:05.052630       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0403 19:17:05.056172       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [20e58d7ea0cc859964c7efdd5ab9138cec85489ca8099525bc20cb04635ac0ee] <==
	I0403 19:17:04.470105       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0403 19:17:04.476074       1 shared_informer.go:320] Caches are synced for disruption
	I0403 19:17:04.476163       1 shared_informer.go:320] Caches are synced for daemon sets
	I0403 19:17:04.476085       1 shared_informer.go:320] Caches are synced for garbage collector
	I0403 19:17:04.476246       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0403 19:17:04.476266       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0403 19:17:04.477877       1 shared_informer.go:320] Caches are synced for garbage collector
	I0403 19:17:04.477953       1 shared_informer.go:320] Caches are synced for deployment
	I0403 19:17:04.481046       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0403 19:17:04.481142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.623µs"
	I0403 19:17:04.482812       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0403 19:17:04.486580       1 shared_informer.go:320] Caches are synced for taint
	I0403 19:17:04.486698       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0403 19:17:04.486781       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-942912"
	I0403 19:17:04.486925       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0403 19:17:04.492686       1 shared_informer.go:320] Caches are synced for namespace
	I0403 19:17:04.493932       1 shared_informer.go:320] Caches are synced for crt configmap
	I0403 19:17:04.499293       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0403 19:17:04.502672       1 shared_informer.go:320] Caches are synced for service account
	I0403 19:17:04.518944       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0403 19:17:04.524499       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0403 19:17:04.526436       1 shared_informer.go:320] Caches are synced for endpoint
	I0403 19:17:04.529547       1 shared_informer.go:320] Caches are synced for resource quota
	I0403 19:17:05.068225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="30.515718ms"
	I0403 19:17:05.068485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.779µs"
	
	
	==> kube-controller-manager [8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6] <==
	I0403 19:16:39.790759       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-942912"
	I0403 19:16:39.790767       1 shared_informer.go:320] Caches are synced for resource quota
	I0403 19:16:39.792333       1 shared_informer.go:320] Caches are synced for resource quota
	I0403 19:16:39.792448       1 shared_informer.go:320] Caches are synced for persistent volume
	I0403 19:16:39.795675       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0403 19:16:39.799097       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0403 19:16:39.800400       1 shared_informer.go:320] Caches are synced for endpoint
	I0403 19:16:39.802499       1 shared_informer.go:320] Caches are synced for job
	I0403 19:16:39.806846       1 shared_informer.go:320] Caches are synced for garbage collector
	I0403 19:16:39.811138       1 shared_informer.go:320] Caches are synced for HPA
	I0403 19:16:39.814463       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0403 19:16:39.815672       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0403 19:16:39.816841       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0403 19:16:39.818031       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0403 19:16:39.832092       1 shared_informer.go:320] Caches are synced for taint
	I0403 19:16:39.832190       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0403 19:16:39.832263       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-942912"
	I0403 19:16:39.832306       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0403 19:16:39.832365       1 shared_informer.go:320] Caches are synced for disruption
	I0403 19:16:39.832801       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0403 19:16:39.833773       1 shared_informer.go:320] Caches are synced for crt configmap
	I0403 19:16:39.844868       1 shared_informer.go:320] Caches are synced for garbage collector
	I0403 19:16:39.844936       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0403 19:16:39.844946       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0403 19:16:44.584579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="101.195µs"
	
	
	==> kube-proxy [aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0403 19:16:35.068796       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0403 19:16:36.614443       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.237"]
	E0403 19:16:36.614622       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0403 19:16:36.653317       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0403 19:16:36.653363       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0403 19:16:36.653386       1 server_linux.go:170] "Using iptables Proxier"
	I0403 19:16:36.655862       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0403 19:16:36.656226       1 server.go:497] "Version info" version="v1.32.2"
	I0403 19:16:36.656248       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 19:16:36.658303       1 config.go:199] "Starting service config controller"
	I0403 19:16:36.658356       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0403 19:16:36.658381       1 config.go:105] "Starting endpoint slice config controller"
	I0403 19:16:36.658385       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0403 19:16:36.659143       1 config.go:329] "Starting node config controller"
	I0403 19:16:36.659167       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0403 19:16:36.759233       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0403 19:16:36.759307       1 shared_informer.go:320] Caches are synced for service config
	I0403 19:16:36.759431       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [de5d6b0d898e42a7cb1ffba73cf384aaa0cf3314cf48b898c5a78b6cf0ecee4e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0403 19:17:01.096180       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0403 19:17:01.106405       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.237"]
	E0403 19:17:01.106528       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0403 19:17:01.144446       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0403 19:17:01.144520       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0403 19:17:01.144556       1 server_linux.go:170] "Using iptables Proxier"
	I0403 19:17:01.146800       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0403 19:17:01.147258       1 server.go:497] "Version info" version="v1.32.2"
	I0403 19:17:01.147303       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 19:17:01.148707       1 config.go:199] "Starting service config controller"
	I0403 19:17:01.148787       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0403 19:17:01.148822       1 config.go:105] "Starting endpoint slice config controller"
	I0403 19:17:01.148838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0403 19:17:01.149417       1 config.go:329] "Starting node config controller"
	I0403 19:17:01.149454       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0403 19:17:01.249668       1 shared_informer.go:320] Caches are synced for node config
	I0403 19:17:01.249712       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0403 19:17:01.249785       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e] <==
	I0403 19:16:34.815813       1 serving.go:386] Generated self-signed cert in-memory
	I0403 19:16:36.624118       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0403 19:16:36.624163       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 19:16:37.192999       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0403 19:16:37.193168       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0403 19:16:37.194189       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0403 19:16:37.194227       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0403 19:16:37.194307       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0403 19:16:37.194384       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0403 19:16:37.195533       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0403 19:16:37.196115       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0403 19:16:37.293706       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0403 19:16:37.294982       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0403 19:16:37.295201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0403 19:16:44.385754       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0403 19:16:44.385900       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0403 19:16:44.386110       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0403 19:16:44.386131       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0403 19:16:44.386156       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	E0403 19:16:44.386638       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9a2909a93c6d79b7316517f6b5b20adf00529bb1c64ee5c47a3d711ad20edcf1] <==
	W0403 19:17:00.366762       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0403 19:17:00.367495       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.367592       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0403 19:17:00.367620       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.367675       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0403 19:17:00.367703       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.367758       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0403 19:17:00.368091       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.368168       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0403 19:17:00.368200       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.368274       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0403 19:17:00.370074       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.370191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0403 19:17:00.370236       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.370312       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0403 19:17:00.370342       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.370818       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0403 19:17:00.370909       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.371069       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0403 19:17:00.371140       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.371326       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0403 19:17:00.371415       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.371560       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0403 19:17:00.371651       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0403 19:17:06.597761       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 03 19:16:59 pause-942912 kubelet[3516]: E0403 19:16:59.679093    3516 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-942912\" not found" node="pause-942912"
	Apr 03 19:16:59 pause-942912 kubelet[3516]: E0403 19:16:59.679651    3516 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-942912\" not found" node="pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.349827    3516 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: E0403 19:17:00.482877    3516 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-942912\" already exists" pod="kube-system/etcd-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.482916    3516 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: E0403 19:17:00.492667    3516 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-942912\" already exists" pod="kube-system/kube-apiserver-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.492707    3516 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: E0403 19:17:00.510259    3516 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-942912\" already exists" pod="kube-system/kube-controller-manager-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.510436    3516 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: E0403 19:17:00.518861    3516 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-942912\" already exists" pod="kube-system/kube-scheduler-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.519942    3516 apiserver.go:52] "Watching apiserver"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.542814    3516 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.578674    3516 kubelet_node_status.go:125] "Node was previously registered" node="pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.578753    3516 kubelet_node_status.go:79] "Successfully registered node" node="pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.578787    3516 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.580067    3516 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.582160    3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2538b1a-3d84-45ad-9f64-907d33b4a586-lib-modules\") pod \"kube-proxy-mqhzs\" (UID: \"b2538b1a-3d84-45ad-9f64-907d33b4a586\") " pod="kube-system/kube-proxy-mqhzs"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.582260    3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2538b1a-3d84-45ad-9f64-907d33b4a586-xtables-lock\") pod \"kube-proxy-mqhzs\" (UID: \"b2538b1a-3d84-45ad-9f64-907d33b4a586\") " pod="kube-system/kube-proxy-mqhzs"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.825916    3516 scope.go:117] "RemoveContainer" containerID="c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.826362    3516 scope.go:117] "RemoveContainer" containerID="aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39"
	Apr 03 19:17:05 pause-942912 kubelet[3516]: I0403 19:17:05.021588    3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 03 19:17:06 pause-942912 kubelet[3516]: E0403 19:17:06.668689    3516 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707826667298666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 19:17:06 pause-942912 kubelet[3516]: E0403 19:17:06.668879    3516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707826667298666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 19:17:16 pause-942912 kubelet[3516]: E0403 19:17:16.671587    3516 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707836671320646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 19:17:16 pause-942912 kubelet[3516]: E0403 19:17:16.671648    3516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707836671320646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-942912 -n pause-942912
helpers_test.go:261: (dbg) Run:  kubectl --context pause-942912 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
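The check at helpers_test.go:261 asks the API server for every pod whose phase is not Running; a minimal manual equivalent (a sketch only, assuming the pause-942912 kubeconfig context is still reachable) is:

	# list non-Running pods across all namespaces of the pause-942912 cluster
	kubectl --context pause-942912 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'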
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-942912 -n pause-942912
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-942912 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-942912 logs -n 25: (1.531339765s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|----------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|----------|
	| ssh     | -p false-999005 sudo systemctl                       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | status cri-docker --all --full                       |                           |         |         |                     |          |
	|         | --no-pager                                           |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo systemctl                       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo cat                             | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo cat                             | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo                                 | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | cri-dockerd --version                                |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo systemctl                       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | status containerd --all --full                       |                           |         |         |                     |          |
	|         | --no-pager                                           |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo systemctl                       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | cat containerd --no-pager                            |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo cat                             | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo cat                             | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo                                 | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | containerd config dump                               |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo systemctl                       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | status crio --all --full                             |                           |         |         |                     |          |
	|         | --no-pager                                           |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo systemctl                       | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | cat crio --no-pager                                  |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo find                            | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |          |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |          |
	| ssh     | -p false-999005 sudo crio                            | false-999005              | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | config                                               |                           |         |         |                     |          |
	| start   | -p kubernetes-upgrade-523797                         | kubernetes-upgrade-523797 | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | --memory=2200                                        |                           |         |         |                     |          |
	|         | --kubernetes-version=v1.32.2                         |                           |         |         |                     |          |
	|         | --alsologtostderr                                    |                           |         |         |                     |          |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |          |
	|         | --container-runtime=crio                             |                           |         |         |                     |          |
	| ssh     | -p cilium-999005 sudo cat                            | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |          |
	| ssh     | -p cilium-999005 sudo cat                            | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | /etc/hosts                                           |                           |         |         |                     |          |
	| ssh     | -p cilium-999005 sudo cat                            | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |          |
	| ssh     | -p cilium-999005 sudo crictl                         | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | pods                                                 |                           |         |         |                     |          |
	| ssh     | -p cilium-999005 sudo crictl                         | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | ps --all                                             |                           |         |         |                     |          |
	| ssh     | -p cilium-999005 sudo find                           | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |          |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |          |
	| ssh     | -p cilium-999005 sudo ip a s                         | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	| ssh     | -p cilium-999005 sudo ip r s                         | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	| ssh     | -p cilium-999005 sudo                                | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | iptables-save                                        |                           |         |         |                     |          |
	| ssh     | -p cilium-999005 sudo iptables                       | cilium-999005             | jenkins | v1.35.0 | 03 Apr 25 19:17 UTC |          |
	|         | -t nat -L -n -v                                      |                           |         |         |                     |          |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 19:17:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 19:17:23.253849   59960 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:17:23.253956   59960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:17:23.253974   59960 out.go:358] Setting ErrFile to fd 2...
	I0403 19:17:23.253981   59960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:17:23.254210   59960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:17:23.254750   59960 out.go:352] Setting JSON to false
	I0403 19:17:23.255680   59960 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7188,"bootTime":1743700655,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:17:23.255788   59960 start.go:139] virtualization: kvm guest
	I0403 19:17:23.324577   59960 out.go:177] * [kubernetes-upgrade-523797] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:17:23.449013   59960 notify.go:220] Checking for updates...
	I0403 19:17:23.598966   59960 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:17:23.761467   59960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:17:23.961126   59960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:17:24.002475   59960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:17:24.025074   59960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:17:24.062074   59960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:17:24.188260   59960 config.go:182] Loaded profile config "kubernetes-upgrade-523797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:17:24.188705   59960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:17:24.188769   59960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:17:24.204482   59960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41761
	I0403 19:17:24.204897   59960 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:17:24.205404   59960 main.go:141] libmachine: Using API Version  1
	I0403 19:17:24.205429   59960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:17:24.205743   59960 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:17:24.205920   59960 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:17:24.206140   59960 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:17:24.206471   59960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:17:24.206522   59960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:17:24.221340   59960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39883
	I0403 19:17:24.221940   59960 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:17:24.222380   59960 main.go:141] libmachine: Using API Version  1
	I0403 19:17:24.222406   59960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:17:24.222701   59960 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:17:24.222887   59960 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:17:24.286493   59960 out.go:177] * Using the kvm2 driver based on existing profile
	I0403 19:17:24.321104   59960 start.go:297] selected driver: kvm2
	I0403 19:17:24.321124   59960 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-523797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 C
lusterName:kubernetes-upgrade-523797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:17:24.321248   59960 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:17:24.322173   59960 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:17:24.322270   59960 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 19:17:24.343988   59960 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0403 19:17:24.344591   59960 cni.go:84] Creating CNI manager for ""
	I0403 19:17:24.344664   59960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:17:24.344716   59960 start.go:340] cluster config:
	{Name:kubernetes-upgrade-523797 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-523797 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnet
Path: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:17:24.344912   59960 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:17:24.346989   59960 out.go:177] * Starting "kubernetes-upgrade-523797" primary control-plane node in "kubernetes-upgrade-523797" cluster
	I0403 19:17:24.348486   59960 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 19:17:24.348540   59960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0403 19:17:24.348552   59960 cache.go:56] Caching tarball of preloaded images
	I0403 19:17:24.348646   59960 preload.go:172] Found /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0403 19:17:24.348660   59960 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0403 19:17:24.348774   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kubernetes-upgrade-523797/config.json ...
	I0403 19:17:24.349020   59960 start.go:360] acquireMachinesLock for kubernetes-upgrade-523797: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 19:17:24.349099   59960 start.go:364] duration metric: took 50.866µs to acquireMachinesLock for "kubernetes-upgrade-523797"
	I0403 19:17:24.349120   59960 start.go:96] Skipping create...Using existing machine configuration
	I0403 19:17:24.349129   59960 fix.go:54] fixHost starting: 
	I0403 19:17:24.349398   59960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:17:24.349440   59960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:17:24.370004   59960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0403 19:17:24.370440   59960 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:17:24.371210   59960 main.go:141] libmachine: Using API Version  1
	I0403 19:17:24.371232   59960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:17:24.371692   59960 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:17:24.371904   59960 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	I0403 19:17:24.372073   59960 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .GetState
	I0403 19:17:24.373950   59960 fix.go:112] recreateIfNeeded on kubernetes-upgrade-523797: state=Stopped err=<nil>
	I0403 19:17:24.373979   59960 main.go:141] libmachine: (kubernetes-upgrade-523797) Calling .DriverName
	W0403 19:17:24.374127   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I0403 19:17:24.375495   59960 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-523797" ...
	
	
	==> CRI-O <==
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.023366737Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707846023333407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74f9cada-e8e2-4bca-9494-a9d66b951b20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.024205055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a757a8bb-e2ac-496d-bc00-679a983f070e name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.024283449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a757a8bb-e2ac-496d-bc00-679a983f070e name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.024619574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de5d6b0d898e42a7cb1ffba73cf384aaa0cf3314cf48b898c5a78b6cf0ecee4e,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707820854533841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180b7b71ed77420e893b05c3668fc4fcb367132ac798ea84a2e93965c7783c1c,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707820862339872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e58d7ea0cc859964c7efdd5ab9138cec85489ca8099525bc20cb04635ac0ee,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707817046743510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce8b17a7294c3e97f6438e5ea654ffba48ee58a9c1f256bed8ed8e8485e4149,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707817049895006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
5cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60f618d7fe4b2e9715541c7aaebf7ccbd3c27bcfdb35db86a4e8e711780e9,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707817018724083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2909a93c6d79b7316517f6b5b20adf00529bb1c64ee5c47a3d711ad20edcf1,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707814556062393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707793624674760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707792850754423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707792777581812,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707792863367105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707792730595453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707792613461701,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 785cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a757a8bb-e2ac-496d-bc00-679a983f070e name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.083121877Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93857c06-3fd5-4390-9fdf-7918e0a84f1e name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.083247462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93857c06-3fd5-4390-9fdf-7918e0a84f1e name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.084720666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83f3ee76-551b-4e2d-b478-7ff110221b82 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.085818823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707846085784290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83f3ee76-551b-4e2d-b478-7ff110221b82 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.086856342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42de427c-56ac-4e93-b60b-6cb03d9d8ba8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.086946845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42de427c-56ac-4e93-b60b-6cb03d9d8ba8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.087433839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de5d6b0d898e42a7cb1ffba73cf384aaa0cf3314cf48b898c5a78b6cf0ecee4e,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707820854533841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180b7b71ed77420e893b05c3668fc4fcb367132ac798ea84a2e93965c7783c1c,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707820862339872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e58d7ea0cc859964c7efdd5ab9138cec85489ca8099525bc20cb04635ac0ee,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707817046743510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce8b17a7294c3e97f6438e5ea654ffba48ee58a9c1f256bed8ed8e8485e4149,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707817049895006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
5cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60f618d7fe4b2e9715541c7aaebf7ccbd3c27bcfdb35db86a4e8e711780e9,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707817018724083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2909a93c6d79b7316517f6b5b20adf00529bb1c64ee5c47a3d711ad20edcf1,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707814556062393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707793624674760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707792850754423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707792777581812,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707792863367105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707792730595453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707792613461701,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 785cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42de427c-56ac-4e93-b60b-6cb03d9d8ba8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.145810737Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f79fd49-ae08-47e3-8e7a-6c4698f2efef name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.145956443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f79fd49-ae08-47e3-8e7a-6c4698f2efef name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.150314810Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1431949f-9c83-410e-b769-f903ba612bd4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.151257518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707846151226664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1431949f-9c83-410e-b769-f903ba612bd4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.153684599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cf5fd5f-3ffb-48f7-a48d-e3924cf88ae9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.153982430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cf5fd5f-3ffb-48f7-a48d-e3924cf88ae9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.154892174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de5d6b0d898e42a7cb1ffba73cf384aaa0cf3314cf48b898c5a78b6cf0ecee4e,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707820854533841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180b7b71ed77420e893b05c3668fc4fcb367132ac798ea84a2e93965c7783c1c,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707820862339872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e58d7ea0cc859964c7efdd5ab9138cec85489ca8099525bc20cb04635ac0ee,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707817046743510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce8b17a7294c3e97f6438e5ea654ffba48ee58a9c1f256bed8ed8e8485e4149,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707817049895006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
5cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60f618d7fe4b2e9715541c7aaebf7ccbd3c27bcfdb35db86a4e8e711780e9,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707817018724083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2909a93c6d79b7316517f6b5b20adf00529bb1c64ee5c47a3d711ad20edcf1,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707814556062393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707793624674760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707792850754423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707792777581812,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707792863367105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707792730595453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707792613461701,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 785cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cf5fd5f-3ffb-48f7-a48d-e3924cf88ae9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.207610238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2dadb3ca-4e64-43d7-8798-7b4d23276466 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.207685764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2dadb3ca-4e64-43d7-8798-7b4d23276466 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.209215091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6154634b-1953-44b4-9a87-4f6f6fc8746e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.209564073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707846209545009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6154634b-1953-44b4-9a87-4f6f6fc8746e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.210278148Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4816289-289b-4792-8d54-43bad1366960 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.210353610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4816289-289b-4792-8d54-43bad1366960 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:17:26 pause-942912 crio[2376]: time="2025-04-03 19:17:26.210661346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de5d6b0d898e42a7cb1ffba73cf384aaa0cf3314cf48b898c5a78b6cf0ecee4e,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743707820854533841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180b7b71ed77420e893b05c3668fc4fcb367132ac798ea84a2e93965c7783c1c,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743707820862339872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e58d7ea0cc859964c7efdd5ab9138cec85489ca8099525bc20cb04635ac0ee,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743707817046743510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce8b17a7294c3e97f6438e5ea654ffba48ee58a9c1f256bed8ed8e8485e4149,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743707817049895006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
5cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e60f618d7fe4b2e9715541c7aaebf7ccbd3c27bcfdb35db86a4e8e711780e9,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743707817018724083,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a2909a93c6d79b7316517f6b5b20adf00529bb1c64ee5c47a3d711ad20edcf1,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743707814556062393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.
kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224,PodSandboxId:8dc7fcf379e0ea227f32b3a20880085ea6bf1773339d4f944f332eafc36fab24,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743707793624674760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bjbjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5451a9c0-aaee-4cac-903b-11fc6b36dce0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a2
04d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39,PodSandboxId:670fafb8c0ef9b4fd184da69ea34791ca2ad99c8aa1744f65ddcb856e44224e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743707792850754423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mqhzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2538b1a-3d84-45ad-9f64-907d33b4a586,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf,PodSandboxId:4e1eae40b00a6d0d84c657e4c1ed411b2cf86b2934fc3ff62764aa8ee4bf9f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743707792777581812,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-942912,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 211dd705b5a567bc2c4647db3d804cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e,PodSandboxId:82947c2a61b7524cee94a2f4f6371b9f37a83df44cf5e4b11c2222773fc5806c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743707792863367105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-942912,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: d9816fe83cc28a788abb5d8f07ecc1f7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6,PodSandboxId:64efbc9b50352d32e4f3fdd22f679e8051abe2a074201d2713f376484218f976,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743707792730595453,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-942912,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c3e0716b5a766cbafd8b852e85321169,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319,PodSandboxId:e8ecf26b53beb1c587f77cb9e3279809bf0fc7cd89d8ed98d33a41702a7282ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743707792613461701,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-942912,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 785cbbfe76490d36e45146f28e5dffbe,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4816289-289b-4792-8d54-43bad1366960 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	180b7b71ed774       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago      Running             coredns                   2                   8dc7fcf379e0e       coredns-668d6bf9bc-bjbjm
	de5d6b0d898e4       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   25 seconds ago      Running             kube-proxy                2                   670fafb8c0ef9       kube-proxy-mqhzs
	dce8b17a7294c       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   29 seconds ago      Running             kube-apiserver            2                   e8ecf26b53beb       kube-apiserver-pause-942912
	20e58d7ea0cc8       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   29 seconds ago      Running             kube-controller-manager   2                   64efbc9b50352       kube-controller-manager-pause-942912
	68e60f618d7fe       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   29 seconds ago      Running             etcd                      2                   4e1eae40b00a6       etcd-pause-942912
	9a2909a93c6d7       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   31 seconds ago      Running             kube-scheduler            2                   82947c2a61b75       kube-scheduler-pause-942912
	c189bb593727f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   52 seconds ago      Exited              coredns                   1                   8dc7fcf379e0e       coredns-668d6bf9bc-bjbjm
	5b57989f55ed3       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   53 seconds ago      Exited              kube-scheduler            1                   82947c2a61b75       kube-scheduler-pause-942912
	aa5533e25d234       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   53 seconds ago      Exited              kube-proxy                1                   670fafb8c0ef9       kube-proxy-mqhzs
	9a003c2044eb8       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   53 seconds ago      Exited              etcd                      1                   4e1eae40b00a6       etcd-pause-942912
	8d56aa312e1b6       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   53 seconds ago      Exited              kube-controller-manager   1                   64efbc9b50352       kube-controller-manager-pause-942912
	dcd3a9ff5b410       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   53 seconds ago      Exited              kube-apiserver            1                   e8ecf26b53beb       kube-apiserver-pause-942912
	
	
	==> coredns [180b7b71ed77420e893b05c3668fc4fcb367132ac798ea84a2e93965c7783c1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41780 - 5213 "HINFO IN 864490175101335737.1691666836178847054. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016178405s
	
	
	==> coredns [c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59436 - 51308 "HINFO IN 2805605650235843080.4615053983371874143. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008941765s
	
	
	==> describe nodes <==
	Name:               pause-942912
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-942912
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053
	                    minikube.k8s.io/name=pause-942912
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_03T19_15_50_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 03 Apr 2025 19:15:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-942912
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 03 Apr 2025 19:17:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 03 Apr 2025 19:17:00 +0000   Thu, 03 Apr 2025 19:15:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 03 Apr 2025 19:17:00 +0000   Thu, 03 Apr 2025 19:15:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 03 Apr 2025 19:17:00 +0000   Thu, 03 Apr 2025 19:15:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 03 Apr 2025 19:17:00 +0000   Thu, 03 Apr 2025 19:15:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.237
	  Hostname:    pause-942912
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb23564742054661a2f7d1256fd8bb69
	  System UUID:                fb235647-4205-4661-a2f7-d1256fd8bb69
	  Boot ID:                    b8c990b7-c200-41ab-ab8d-d7237f26c8d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-bjbjm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     91s
	  kube-system                 etcd-pause-942912                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         98s
	  kube-system                 kube-apiserver-pause-942912             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-pause-942912    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-mqhzs                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-pause-942912             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 25s                  kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node pause-942912 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node pause-942912 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)  kubelet          Node pause-942912 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node pause-942912 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node pause-942912 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node pause-942912 status is now: NodeHasSufficientPID
	  Normal  NodeReady                96s                  kubelet          Node pause-942912 status is now: NodeReady
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           92s                  node-controller  Node pause-942912 event: Registered Node pause-942912 in Controller
	  Normal  RegisteredNode           47s                  node-controller  Node pause-942912 event: Registered Node pause-942912 in Controller
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)    kubelet          Node pause-942912 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)    kubelet          Node pause-942912 status is now: NodeHasSufficientMemory
	  Normal  Starting                 30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     30s (x7 over 30s)    kubelet          Node pause-942912 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22s                  node-controller  Node pause-942912 event: Registered Node pause-942912 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.274194] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.063673] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058781] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.206639] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.118284] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.261332] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.197850] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.363369] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.077912] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.556160] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.095711] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.253398] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.115626] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 3 19:16] kauditd_printk_skb: 88 callbacks suppressed
	[ +24.361638] systemd-fstab-generator[2302]: Ignoring "noauto" option for root device
	[  +0.194615] systemd-fstab-generator[2314]: Ignoring "noauto" option for root device
	[  +0.194624] systemd-fstab-generator[2328]: Ignoring "noauto" option for root device
	[  +0.151759] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +0.302919] systemd-fstab-generator[2368]: Ignoring "noauto" option for root device
	[  +1.797621] systemd-fstab-generator[2998]: Ignoring "noauto" option for root device
	[  +3.194952] kauditd_printk_skb: 195 callbacks suppressed
	[ +19.462808] systemd-fstab-generator[3510]: Ignoring "noauto" option for root device
	[Apr 3 19:17] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.154386] systemd-fstab-generator[3892]: Ignoring "noauto" option for root device
	
	
	==> etcd [68e60f618d7fe4b2e9715541c7aaebf7ccbd3c27bcfdb35db86a4e8e711780e9] <==
	{"level":"info","ts":"2025-04-03T19:17:02.650762Z","caller":"traceutil/trace.go:171","msg":"trace[1854960205] linearizableReadLoop","detail":"{readStateIndex:594; appliedIndex:593; }","duration":"265.438811ms","start":"2025-04-03T19:17:02.385304Z","end":"2025-04-03T19:17:02.650743Z","steps":["trace[1854960205] 'read index received'  (duration: 265.375455ms)","trace[1854960205] 'applied index is now lower than readState.Index'  (duration: 62.743µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-03T19:17:02.650864Z","caller":"traceutil/trace.go:171","msg":"trace[142291753] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"422.634762ms","start":"2025-04-03T19:17:02.228222Z","end":"2025-04-03T19:17:02.650857Z","steps":["trace[142291753] 'process raft request'  (duration: 422.390381ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:02.651173Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:17:02.228204Z","time spent":"422.68159ms","remote":"127.0.0.1:43796","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:381 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"warn","ts":"2025-04-03T19:17:02.651432Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.119397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-04-03T19:17:02.651479Z","caller":"traceutil/trace.go:171","msg":"trace[770933996] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pod-garbage-collector; range_end:; response_count:1; response_revision:552; }","duration":"266.190345ms","start":"2025-04-03T19:17:02.385280Z","end":"2025-04-03T19:17:02.651470Z","steps":["trace[770933996] 'agreement among raft nodes before linearized reading'  (duration: 266.100088ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:02.651628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.273035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T19:17:02.651664Z","caller":"traceutil/trace.go:171","msg":"trace[1173148403] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:552; }","duration":"266.319475ms","start":"2025-04-03T19:17:02.385337Z","end":"2025-04-03T19:17:02.651657Z","steps":["trace[1173148403] 'agreement among raft nodes before linearized reading'  (duration: 266.270878ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:02.651861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.399922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T19:17:02.651900Z","caller":"traceutil/trace.go:171","msg":"trace[1989764075] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:552; }","duration":"266.44866ms","start":"2025-04-03T19:17:02.385444Z","end":"2025-04-03T19:17:02.651893Z","steps":["trace[1989764075] 'agreement among raft nodes before linearized reading'  (duration: 266.395789ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:03.213363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.367075ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16150073177975122828 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kube-proxy\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kube-proxy\" value_size:115 >> failure:<>>","response":"size:5"}
	{"level":"info","ts":"2025-04-03T19:17:03.213565Z","caller":"traceutil/trace.go:171","msg":"trace[1631560445] linearizableReadLoop","detail":"{readStateIndex:595; appliedIndex:594; }","duration":"553.380302ms","start":"2025-04-03T19:17:02.660168Z","end":"2025-04-03T19:17:03.213548Z","steps":["trace[1631560445] 'read index received'  (duration: 139.674535ms)","trace[1631560445] 'applied index is now lower than readState.Index'  (duration: 413.704116ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-03T19:17:03.213609Z","caller":"traceutil/trace.go:171","msg":"trace[1618896695] transaction","detail":"{read_only:false; number_of_response:0; response_revision:552; }","duration":"554.04049ms","start":"2025-04-03T19:17:02.659547Z","end":"2025-04-03T19:17:03.213587Z","steps":["trace[1618896695] 'process raft request'  (duration: 140.36257ms)","trace[1618896695] 'compare'  (duration: 413.267103ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-03T19:17:03.213714Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:17:02.659531Z","time spent":"554.144734ms","remote":"127.0.0.1:43552","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kube-proxy\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kube-proxy\" value_size:115 >> failure:<>"}
	{"level":"warn","ts":"2025-04-03T19:17:03.213789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"553.566479ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-04-03T19:17:03.213839Z","caller":"traceutil/trace.go:171","msg":"trace[1288001019] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:552; }","duration":"553.683115ms","start":"2025-04-03T19:17:02.660147Z","end":"2025-04-03T19:17:03.213830Z","steps":["trace[1288001019] 'agreement among raft nodes before linearized reading'  (duration: 553.493016ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:03.213892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:17:02.660135Z","time spent":"553.746254ms","remote":"127.0.0.1:43552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":218,"request content":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 "}
	{"level":"info","ts":"2025-04-03T19:17:03.342451Z","caller":"traceutil/trace.go:171","msg":"trace[1681559598] transaction","detail":"{read_only:false; number_of_response:0; response_revision:552; }","duration":"117.774713ms","start":"2025-04-03T19:17:03.224649Z","end":"2025-04-03T19:17:03.342423Z","steps":["trace[1681559598] 'process raft request'  (duration: 117.663563ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T19:17:03.564698Z","caller":"traceutil/trace.go:171","msg":"trace[1425395186] linearizableReadLoop","detail":"{readStateIndex:598; appliedIndex:597; }","duration":"135.038272ms","start":"2025-04-03T19:17:03.429643Z","end":"2025-04-03T19:17:03.564681Z","steps":["trace[1425395186] 'read index received'  (duration: 134.931835ms)","trace[1425395186] 'applied index is now lower than readState.Index'  (duration: 105.922µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-03T19:17:03.564834Z","caller":"traceutil/trace.go:171","msg":"trace[1392161603] transaction","detail":"{read_only:false; number_of_response:0; response_revision:552; }","duration":"136.54235ms","start":"2025-04-03T19:17:03.428283Z","end":"2025-04-03T19:17:03.564825Z","steps":["trace[1392161603] 'process raft request'  (duration: 136.337781ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:17:03.564908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.243625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-04-03T19:17:03.565945Z","caller":"traceutil/trace.go:171","msg":"trace[156784200] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:552; }","duration":"136.312574ms","start":"2025-04-03T19:17:03.429619Z","end":"2025-04-03T19:17:03.565931Z","steps":["trace[156784200] 'agreement among raft nodes before linearized reading'  (duration: 135.234007ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T19:17:04.822997Z","caller":"traceutil/trace.go:171","msg":"trace[1402701582] linearizableReadLoop","detail":"{readStateIndex:600; appliedIndex:599; }","duration":"238.609036ms","start":"2025-04-03T19:17:04.584371Z","end":"2025-04-03T19:17:04.822980Z","steps":["trace[1402701582] 'read index received'  (duration: 154.707485ms)","trace[1402701582] 'applied index is now lower than readState.Index'  (duration: 83.900583ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-03T19:17:04.823233Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.855585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-bjbjm\" limit:1 ","response":"range_response_count:1 size:5149"}
	{"level":"info","ts":"2025-04-03T19:17:04.823281Z","caller":"traceutil/trace.go:171","msg":"trace[782465285] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-bjbjm; range_end:; response_count:1; response_revision:554; }","duration":"238.943664ms","start":"2025-04-03T19:17:04.584327Z","end":"2025-04-03T19:17:04.823270Z","steps":["trace[782465285] 'agreement among raft nodes before linearized reading'  (duration: 238.799822ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T19:17:04.823566Z","caller":"traceutil/trace.go:171","msg":"trace[666053067] transaction","detail":"{read_only:false; response_revision:554; number_of_response:1; }","duration":"289.19715ms","start":"2025-04-03T19:17:04.534357Z","end":"2025-04-03T19:17:04.823554Z","steps":["trace[666053067] 'process raft request'  (duration: 204.773168ms)","trace[666053067] 'compare'  (duration: 83.506418ms)"],"step_count":2}
	
	
	==> etcd [9a003c2044eb8d365c73424c3190d1a88beda7e1cec88f4a53f2bd5a001d5bcf] <==
	{"level":"warn","ts":"2025-04-03T19:16:37.188458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:16:36.664445Z","time spent":"524.012659ms","remote":"127.0.0.1:38202","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-04-03T19:16:37.188663Z","caller":"traceutil/trace.go:171","msg":"trace[731657256] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"517.489942ms","start":"2025-04-03T19:16:36.671161Z","end":"2025-04-03T19:16:37.188651Z","steps":["trace[731657256] 'process raft request'  (duration: 516.448738ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:16:37.188729Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:16:36.671141Z","time spent":"517.563776ms","remote":"127.0.0.1:37930","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-942912\" mod_revision:419 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-942912\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-942912\" > >"}
	{"level":"warn","ts":"2025-04-03T19:16:37.732131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.242837ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16150073177969010048 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:602095fd1568257f>","response":"size:41"}
	{"level":"info","ts":"2025-04-03T19:16:37.732384Z","caller":"traceutil/trace.go:171","msg":"trace[1369975838] linearizableReadLoop","detail":"{readStateIndex:449; appliedIndex:447; }","duration":"539.510461ms","start":"2025-04-03T19:16:37.192856Z","end":"2025-04-03T19:16:37.732367Z","steps":["trace[1369975838] 'read index received'  (duration: 196.977998ms)","trace[1369975838] 'applied index is now lower than readState.Index'  (duration: 342.531963ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-03T19:16:37.732470Z","caller":"traceutil/trace.go:171","msg":"trace[1788893950] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"541.11569ms","start":"2025-04-03T19:16:37.191336Z","end":"2025-04-03T19:16:37.732452Z","steps":["trace[1788893950] 'process raft request'  (duration: 540.943124ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:16:37.732553Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:16:37.191273Z","time spent":"541.235892ms","remote":"127.0.0.1:38202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":534,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-942912.1832e500a3da667a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-942912.1832e500a3da667a\" value_size:462 lease:6926701141114234235 >> failure:<>"}
	{"level":"warn","ts":"2025-04-03T19:16:37.732731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"539.864482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-942912\" limit:1 ","response":"range_response_count:1 size:6988"}
	{"level":"info","ts":"2025-04-03T19:16:37.732821Z","caller":"traceutil/trace.go:171","msg":"trace[265576591] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-942912; range_end:; response_count:1; response_revision:425; }","duration":"540.012701ms","start":"2025-04-03T19:16:37.192798Z","end":"2025-04-03T19:16:37.732810Z","steps":["trace[265576591] 'agreement among raft nodes before linearized reading'  (duration: 539.689152ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:16:37.732892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:16:37.192787Z","time spent":"540.091964ms","remote":"127.0.0.1:37850","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":1,"response size":7011,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-942912\" limit:1 "}
	{"level":"warn","ts":"2025-04-03T19:16:37.732965Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.840271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:1 size:442"}
	{"level":"warn","ts":"2025-04-03T19:16:37.733202Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.301861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-03T19:16:37.733275Z","caller":"traceutil/trace.go:171","msg":"trace[710457659] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:425; }","duration":"255.402308ms","start":"2025-04-03T19:16:37.477865Z","end":"2025-04-03T19:16:37.733267Z","steps":["trace[710457659] 'agreement among raft nodes before linearized reading'  (duration: 255.309613ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-03T19:16:37.733230Z","caller":"traceutil/trace.go:171","msg":"trace[1613584348] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:425; }","duration":"258.129062ms","start":"2025-04-03T19:16:37.475091Z","end":"2025-04-03T19:16:37.733220Z","steps":["trace[1613584348] 'agreement among raft nodes before linearized reading'  (duration: 257.81234ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-03T19:16:37.732411Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-03T19:16:37.041305Z","time spent":"691.101767ms","remote":"127.0.0.1:37754","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-04-03T19:16:44.673134Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-03T19:16:44.673224Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-942912","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.237:2380"],"advertise-client-urls":["https://192.168.50.237:2379"]}
	{"level":"warn","ts":"2025-04-03T19:16:44.673310Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-03T19:16:44.673463Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-03T19:16:44.715890Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.237:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-03T19:16:44.715964Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.237:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-03T19:16:44.716092Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2850e06a3711e020","current-leader-member-id":"2850e06a3711e020"}
	{"level":"info","ts":"2025-04-03T19:16:44.721693Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.50.237:2380"}
	{"level":"info","ts":"2025-04-03T19:16:44.721881Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.50.237:2380"}
	{"level":"info","ts":"2025-04-03T19:16:44.721915Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-942912","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.237:2380"],"advertise-client-urls":["https://192.168.50.237:2379"]}
	
	
	==> kernel <==
	 19:17:26 up 2 min,  0 users,  load average: 1.30, 0.63, 0.24
	Linux pause-942912 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dcd3a9ff5b410375e037e5a18f476cdb7748487391a9f890d505541d84f81319] <==
	W0403 19:16:54.150334       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.165753       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.171243       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.175683       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.205463       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.249106       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.300703       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.333426       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.337777       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.372480       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.375933       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.400474       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.426458       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.487224       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.508811       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.516448       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.532742       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.548726       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.672929       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.704580       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.733480       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.749533       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.840931       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.841276       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0403 19:16:54.971418       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dce8b17a7294c3e97f6438e5ea654ffba48ee58a9c1f256bed8ed8e8485e4149] <==
	I0403 19:17:00.402972       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0403 19:17:00.403058       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0403 19:17:00.411811       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0403 19:17:00.411896       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0403 19:17:00.411996       1 aggregator.go:171] initial CRD sync complete...
	I0403 19:17:00.412083       1 autoregister_controller.go:144] Starting autoregister controller
	I0403 19:17:00.412104       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0403 19:17:00.412110       1 cache.go:39] Caches are synced for autoregister controller
	I0403 19:17:00.454839       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0403 19:17:00.454904       1 policy_source.go:240] refreshing policies
	I0403 19:17:00.491918       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0403 19:17:00.494002       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0403 19:17:00.494440       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0403 19:17:00.494614       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0403 19:17:00.519824       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0403 19:17:00.534694       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0403 19:17:00.590569       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0403 19:17:01.305694       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0403 19:17:01.803548       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0403 19:17:01.847477       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0403 19:17:03.348664       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0403 19:17:03.427454       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0403 19:17:05.051631       1 controller.go:615] quota admission added evaluator for: endpoints
	I0403 19:17:05.052630       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0403 19:17:05.056172       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [20e58d7ea0cc859964c7efdd5ab9138cec85489ca8099525bc20cb04635ac0ee] <==
	I0403 19:17:04.470105       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0403 19:17:04.476074       1 shared_informer.go:320] Caches are synced for disruption
	I0403 19:17:04.476163       1 shared_informer.go:320] Caches are synced for daemon sets
	I0403 19:17:04.476085       1 shared_informer.go:320] Caches are synced for garbage collector
	I0403 19:17:04.476246       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0403 19:17:04.476266       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0403 19:17:04.477877       1 shared_informer.go:320] Caches are synced for garbage collector
	I0403 19:17:04.477953       1 shared_informer.go:320] Caches are synced for deployment
	I0403 19:17:04.481046       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0403 19:17:04.481142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.623µs"
	I0403 19:17:04.482812       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0403 19:17:04.486580       1 shared_informer.go:320] Caches are synced for taint
	I0403 19:17:04.486698       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0403 19:17:04.486781       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-942912"
	I0403 19:17:04.486925       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0403 19:17:04.492686       1 shared_informer.go:320] Caches are synced for namespace
	I0403 19:17:04.493932       1 shared_informer.go:320] Caches are synced for crt configmap
	I0403 19:17:04.499293       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0403 19:17:04.502672       1 shared_informer.go:320] Caches are synced for service account
	I0403 19:17:04.518944       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0403 19:17:04.524499       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0403 19:17:04.526436       1 shared_informer.go:320] Caches are synced for endpoint
	I0403 19:17:04.529547       1 shared_informer.go:320] Caches are synced for resource quota
	I0403 19:17:05.068225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="30.515718ms"
	I0403 19:17:05.068485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.779µs"
	
	
	==> kube-controller-manager [8d56aa312e1b69a3a716da6b6f4a9b5192a9cb2a4a2e3a675574c4870c1d80f6] <==
	I0403 19:16:39.790759       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-942912"
	I0403 19:16:39.790767       1 shared_informer.go:320] Caches are synced for resource quota
	I0403 19:16:39.792333       1 shared_informer.go:320] Caches are synced for resource quota
	I0403 19:16:39.792448       1 shared_informer.go:320] Caches are synced for persistent volume
	I0403 19:16:39.795675       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0403 19:16:39.799097       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0403 19:16:39.800400       1 shared_informer.go:320] Caches are synced for endpoint
	I0403 19:16:39.802499       1 shared_informer.go:320] Caches are synced for job
	I0403 19:16:39.806846       1 shared_informer.go:320] Caches are synced for garbage collector
	I0403 19:16:39.811138       1 shared_informer.go:320] Caches are synced for HPA
	I0403 19:16:39.814463       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0403 19:16:39.815672       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0403 19:16:39.816841       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0403 19:16:39.818031       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0403 19:16:39.832092       1 shared_informer.go:320] Caches are synced for taint
	I0403 19:16:39.832190       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0403 19:16:39.832263       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-942912"
	I0403 19:16:39.832306       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0403 19:16:39.832365       1 shared_informer.go:320] Caches are synced for disruption
	I0403 19:16:39.832801       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0403 19:16:39.833773       1 shared_informer.go:320] Caches are synced for crt configmap
	I0403 19:16:39.844868       1 shared_informer.go:320] Caches are synced for garbage collector
	I0403 19:16:39.844936       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0403 19:16:39.844946       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0403 19:16:44.584579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="101.195µs"
	
	
	==> kube-proxy [aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0403 19:16:35.068796       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0403 19:16:36.614443       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.237"]
	E0403 19:16:36.614622       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0403 19:16:36.653317       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0403 19:16:36.653363       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0403 19:16:36.653386       1 server_linux.go:170] "Using iptables Proxier"
	I0403 19:16:36.655862       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0403 19:16:36.656226       1 server.go:497] "Version info" version="v1.32.2"
	I0403 19:16:36.656248       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 19:16:36.658303       1 config.go:199] "Starting service config controller"
	I0403 19:16:36.658356       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0403 19:16:36.658381       1 config.go:105] "Starting endpoint slice config controller"
	I0403 19:16:36.658385       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0403 19:16:36.659143       1 config.go:329] "Starting node config controller"
	I0403 19:16:36.659167       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0403 19:16:36.759233       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0403 19:16:36.759307       1 shared_informer.go:320] Caches are synced for service config
	I0403 19:16:36.759431       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [de5d6b0d898e42a7cb1ffba73cf384aaa0cf3314cf48b898c5a78b6cf0ecee4e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0403 19:17:01.096180       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0403 19:17:01.106405       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.237"]
	E0403 19:17:01.106528       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0403 19:17:01.144446       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0403 19:17:01.144520       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0403 19:17:01.144556       1 server_linux.go:170] "Using iptables Proxier"
	I0403 19:17:01.146800       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0403 19:17:01.147258       1 server.go:497] "Version info" version="v1.32.2"
	I0403 19:17:01.147303       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 19:17:01.148707       1 config.go:199] "Starting service config controller"
	I0403 19:17:01.148787       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0403 19:17:01.148822       1 config.go:105] "Starting endpoint slice config controller"
	I0403 19:17:01.148838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0403 19:17:01.149417       1 config.go:329] "Starting node config controller"
	I0403 19:17:01.149454       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0403 19:17:01.249668       1 shared_informer.go:320] Caches are synced for node config
	I0403 19:17:01.249712       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0403 19:17:01.249785       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5b57989f55ed3f0edbdb7d30bf887d260ac751cb23c487dd7438a22192bfe44e] <==
	I0403 19:16:34.815813       1 serving.go:386] Generated self-signed cert in-memory
	I0403 19:16:36.624118       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0403 19:16:36.624163       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0403 19:16:37.192999       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0403 19:16:37.193168       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0403 19:16:37.194189       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0403 19:16:37.194227       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0403 19:16:37.194307       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0403 19:16:37.194384       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0403 19:16:37.195533       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0403 19:16:37.196115       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0403 19:16:37.293706       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0403 19:16:37.294982       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0403 19:16:37.295201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0403 19:16:44.385754       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0403 19:16:44.385900       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0403 19:16:44.386110       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0403 19:16:44.386131       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0403 19:16:44.386156       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	E0403 19:16:44.386638       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9a2909a93c6d79b7316517f6b5b20adf00529bb1c64ee5c47a3d711ad20edcf1] <==
	W0403 19:17:00.366762       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0403 19:17:00.367495       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.367592       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0403 19:17:00.367620       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.367675       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0403 19:17:00.367703       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.367758       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0403 19:17:00.368091       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.368168       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0403 19:17:00.368200       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.368274       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0403 19:17:00.370074       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.370191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0403 19:17:00.370236       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.370312       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0403 19:17:00.370342       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.370818       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0403 19:17:00.370909       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.371069       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0403 19:17:00.371140       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.371326       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0403 19:17:00.371415       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0403 19:17:00.371560       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0403 19:17:00.371651       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0403 19:17:06.597761       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.349827    3516 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: E0403 19:17:00.482877    3516 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-942912\" already exists" pod="kube-system/etcd-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.482916    3516 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: E0403 19:17:00.492667    3516 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-942912\" already exists" pod="kube-system/kube-apiserver-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.492707    3516 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: E0403 19:17:00.510259    3516 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-942912\" already exists" pod="kube-system/kube-controller-manager-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.510436    3516 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: E0403 19:17:00.518861    3516 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-942912\" already exists" pod="kube-system/kube-scheduler-pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.519942    3516 apiserver.go:52] "Watching apiserver"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.542814    3516 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.578674    3516 kubelet_node_status.go:125] "Node was previously registered" node="pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.578753    3516 kubelet_node_status.go:79] "Successfully registered node" node="pause-942912"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.578787    3516 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.580067    3516 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.582160    3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2538b1a-3d84-45ad-9f64-907d33b4a586-lib-modules\") pod \"kube-proxy-mqhzs\" (UID: \"b2538b1a-3d84-45ad-9f64-907d33b4a586\") " pod="kube-system/kube-proxy-mqhzs"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.582260    3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2538b1a-3d84-45ad-9f64-907d33b4a586-xtables-lock\") pod \"kube-proxy-mqhzs\" (UID: \"b2538b1a-3d84-45ad-9f64-907d33b4a586\") " pod="kube-system/kube-proxy-mqhzs"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.825916    3516 scope.go:117] "RemoveContainer" containerID="c189bb593727fda0d278e61787622644be373336f67baa29f4211036df765224"
	Apr 03 19:17:00 pause-942912 kubelet[3516]: I0403 19:17:00.826362    3516 scope.go:117] "RemoveContainer" containerID="aa5533e25d2340cfc9c488c27a949ecd6ecf4bc11b5aca4449a299f81238ab39"
	Apr 03 19:17:05 pause-942912 kubelet[3516]: I0403 19:17:05.021588    3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 03 19:17:06 pause-942912 kubelet[3516]: E0403 19:17:06.668689    3516 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707826667298666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 19:17:06 pause-942912 kubelet[3516]: E0403 19:17:06.668879    3516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707826667298666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 19:17:16 pause-942912 kubelet[3516]: E0403 19:17:16.671587    3516 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707836671320646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 19:17:16 pause-942912 kubelet[3516]: E0403 19:17:16.671648    3516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707836671320646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 19:17:26 pause-942912 kubelet[3516]: E0403 19:17:26.674454    3516 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707846672892676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 03 19:17:26 pause-942912 kubelet[3516]: E0403 19:17:26.674518    3516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743707846672892676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-942912 -n pause-942912
helpers_test.go:261: (dbg) Run:  kubectl --context pause-942912 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (86.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (287.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-471019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0403 19:19:09.255611   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-471019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m47.050402034s)

                                                
                                                
-- stdout --
	* [old-k8s-version-471019] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-471019" primary control-plane node in "old-k8s-version-471019" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0403 19:18:54.258625   62736 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:18:54.258726   62736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:18:54.258737   62736 out.go:358] Setting ErrFile to fd 2...
	I0403 19:18:54.258741   62736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:18:54.258973   62736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:18:54.259550   62736 out.go:352] Setting JSON to false
	I0403 19:18:54.260489   62736 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7279,"bootTime":1743700655,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:18:54.260596   62736 start.go:139] virtualization: kvm guest
	I0403 19:18:54.262332   62736 out.go:177] * [old-k8s-version-471019] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:18:54.263394   62736 notify.go:220] Checking for updates...
	I0403 19:18:54.263409   62736 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:18:54.264392   62736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:18:54.265494   62736 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:18:54.266446   62736 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:18:54.267435   62736 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:18:54.268485   62736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:18:54.269850   62736 config.go:182] Loaded profile config "cert-expiration-954352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:18:54.269977   62736 config.go:182] Loaded profile config "cert-options-528707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:18:54.270077   62736 config.go:182] Loaded profile config "kubernetes-upgrade-523797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:18:54.270190   62736 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:18:54.304778   62736 out.go:177] * Using the kvm2 driver based on user configuration
	I0403 19:18:54.305732   62736 start.go:297] selected driver: kvm2
	I0403 19:18:54.305745   62736 start.go:901] validating driver "kvm2" against <nil>
	I0403 19:18:54.305756   62736 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:18:54.306440   62736 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:18:54.306512   62736 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 19:18:54.323479   62736 install.go:137] /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0403 19:18:54.323524   62736 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 19:18:54.323747   62736 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:18:54.323776   62736 cni.go:84] Creating CNI manager for ""
	I0403 19:18:54.323814   62736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:18:54.323822   62736 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 19:18:54.323868   62736 start.go:340] cluster config:
	{Name:old-k8s-version-471019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-471019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:18:54.323974   62736 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:18:54.326102   62736 out.go:177] * Starting "old-k8s-version-471019" primary control-plane node in "old-k8s-version-471019" cluster
	I0403 19:18:54.327093   62736 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0403 19:18:54.327133   62736 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0403 19:18:54.327143   62736 cache.go:56] Caching tarball of preloaded images
	I0403 19:18:54.327231   62736 preload.go:172] Found /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0403 19:18:54.327246   62736 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0403 19:18:54.327350   62736 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/config.json ...
	I0403 19:18:54.327369   62736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/config.json: {Name:mk0eeced6959f2a235aacee4e625c59116f4b72c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:18:54.327516   62736 start.go:360] acquireMachinesLock for old-k8s-version-471019: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 19:19:12.587659   62736 start.go:364] duration metric: took 18.260092887s to acquireMachinesLock for "old-k8s-version-471019"
	I0403 19:19:12.587754   62736 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-471019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-471019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:19:12.587903   62736 start.go:125] createHost starting for "" (driver="kvm2")
	I0403 19:19:12.589348   62736 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0403 19:19:12.589559   62736 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:19:12.589624   62736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:19:12.607183   62736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0403 19:19:12.607549   62736 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:19:12.608140   62736 main.go:141] libmachine: Using API Version  1
	I0403 19:19:12.608169   62736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:19:12.608471   62736 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:19:12.608658   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetMachineName
	I0403 19:19:12.608796   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:19:12.608945   62736 start.go:159] libmachine.API.Create for "old-k8s-version-471019" (driver="kvm2")
	I0403 19:19:12.608976   62736 client.go:168] LocalClient.Create starting
	I0403 19:19:12.609008   62736 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem
	I0403 19:19:12.609049   62736 main.go:141] libmachine: Decoding PEM data...
	I0403 19:19:12.609073   62736 main.go:141] libmachine: Parsing certificate...
	I0403 19:19:12.609157   62736 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem
	I0403 19:19:12.609198   62736 main.go:141] libmachine: Decoding PEM data...
	I0403 19:19:12.609220   62736 main.go:141] libmachine: Parsing certificate...
	I0403 19:19:12.609246   62736 main.go:141] libmachine: Running pre-create checks...
	I0403 19:19:12.609259   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .PreCreateCheck
	I0403 19:19:12.609537   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetConfigRaw
	I0403 19:19:12.609969   62736 main.go:141] libmachine: Creating machine...
	I0403 19:19:12.609984   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .Create
	I0403 19:19:12.610092   62736 main.go:141] libmachine: (old-k8s-version-471019) creating KVM machine...
	I0403 19:19:12.610110   62736 main.go:141] libmachine: (old-k8s-version-471019) creating network...
	I0403 19:19:12.611371   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found existing default KVM network
	I0403 19:19:12.612770   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:12.612600   62860 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:65:f1:53} reservation:<nil>}
	I0403 19:19:12.613912   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:12.613811   62860 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:df:83:7a} reservation:<nil>}
	I0403 19:19:12.615311   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:12.615214   62860 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002811b0}
	I0403 19:19:12.615347   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | created network xml: 
	I0403 19:19:12.615359   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | <network>
	I0403 19:19:12.615370   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG |   <name>mk-old-k8s-version-471019</name>
	I0403 19:19:12.615378   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG |   <dns enable='no'/>
	I0403 19:19:12.615385   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG |   
	I0403 19:19:12.615402   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0403 19:19:12.615410   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG |     <dhcp>
	I0403 19:19:12.615420   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0403 19:19:12.615427   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG |     </dhcp>
	I0403 19:19:12.615435   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG |   </ip>
	I0403 19:19:12.615441   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG |   
	I0403 19:19:12.615449   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | </network>
	I0403 19:19:12.615455   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | 
	I0403 19:19:12.620292   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | trying to create private KVM network mk-old-k8s-version-471019 192.168.61.0/24...
	I0403 19:19:12.704480   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | private KVM network mk-old-k8s-version-471019 192.168.61.0/24 created
	I0403 19:19:12.704514   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:12.704448   62860 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:19:12.704538   62736 main.go:141] libmachine: (old-k8s-version-471019) setting up store path in /home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019 ...
	I0403 19:19:12.704553   62736 main.go:141] libmachine: (old-k8s-version-471019) building disk image from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0403 19:19:12.704641   62736 main.go:141] libmachine: (old-k8s-version-471019) Downloading /home/jenkins/minikube-integration/20591-14371/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0403 19:19:12.999564   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:12.999411   62860 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa...
	I0403 19:19:13.106693   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:13.106549   62860 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/old-k8s-version-471019.rawdisk...
	I0403 19:19:13.106742   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | Writing magic tar header
	I0403 19:19:13.106761   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | Writing SSH key tar header
	I0403 19:19:13.106775   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:13.106654   62860 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019 ...
	I0403 19:19:13.106790   62736 main.go:141] libmachine: (old-k8s-version-471019) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019 (perms=drwx------)
	I0403 19:19:13.106807   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019
	I0403 19:19:13.106832   62736 main.go:141] libmachine: (old-k8s-version-471019) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines (perms=drwxr-xr-x)
	I0403 19:19:13.106849   62736 main.go:141] libmachine: (old-k8s-version-471019) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube (perms=drwxr-xr-x)
	I0403 19:19:13.106871   62736 main.go:141] libmachine: (old-k8s-version-471019) setting executable bit set on /home/jenkins/minikube-integration/20591-14371 (perms=drwxrwxr-x)
	I0403 19:19:13.106894   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines
	I0403 19:19:13.106908   62736 main.go:141] libmachine: (old-k8s-version-471019) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0403 19:19:13.106947   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:19:13.106969   62736 main.go:141] libmachine: (old-k8s-version-471019) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0403 19:19:13.106978   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371
	I0403 19:19:13.106992   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0403 19:19:13.107001   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | checking permissions on dir: /home/jenkins
	I0403 19:19:13.107010   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | checking permissions on dir: /home
	I0403 19:19:13.107020   62736 main.go:141] libmachine: (old-k8s-version-471019) creating domain...
	I0403 19:19:13.107027   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | skipping /home - not owner
	I0403 19:19:13.108312   62736 main.go:141] libmachine: (old-k8s-version-471019) define libvirt domain using xml: 
	I0403 19:19:13.108337   62736 main.go:141] libmachine: (old-k8s-version-471019) <domain type='kvm'>
	I0403 19:19:13.108363   62736 main.go:141] libmachine: (old-k8s-version-471019)   <name>old-k8s-version-471019</name>
	I0403 19:19:13.108377   62736 main.go:141] libmachine: (old-k8s-version-471019)   <memory unit='MiB'>2200</memory>
	I0403 19:19:13.108387   62736 main.go:141] libmachine: (old-k8s-version-471019)   <vcpu>2</vcpu>
	I0403 19:19:13.108395   62736 main.go:141] libmachine: (old-k8s-version-471019)   <features>
	I0403 19:19:13.108405   62736 main.go:141] libmachine: (old-k8s-version-471019)     <acpi/>
	I0403 19:19:13.108418   62736 main.go:141] libmachine: (old-k8s-version-471019)     <apic/>
	I0403 19:19:13.108428   62736 main.go:141] libmachine: (old-k8s-version-471019)     <pae/>
	I0403 19:19:13.108436   62736 main.go:141] libmachine: (old-k8s-version-471019)     
	I0403 19:19:13.108450   62736 main.go:141] libmachine: (old-k8s-version-471019)   </features>
	I0403 19:19:13.108459   62736 main.go:141] libmachine: (old-k8s-version-471019)   <cpu mode='host-passthrough'>
	I0403 19:19:13.108471   62736 main.go:141] libmachine: (old-k8s-version-471019)   
	I0403 19:19:13.108479   62736 main.go:141] libmachine: (old-k8s-version-471019)   </cpu>
	I0403 19:19:13.108489   62736 main.go:141] libmachine: (old-k8s-version-471019)   <os>
	I0403 19:19:13.108508   62736 main.go:141] libmachine: (old-k8s-version-471019)     <type>hvm</type>
	I0403 19:19:13.108517   62736 main.go:141] libmachine: (old-k8s-version-471019)     <boot dev='cdrom'/>
	I0403 19:19:13.108525   62736 main.go:141] libmachine: (old-k8s-version-471019)     <boot dev='hd'/>
	I0403 19:19:13.108535   62736 main.go:141] libmachine: (old-k8s-version-471019)     <bootmenu enable='no'/>
	I0403 19:19:13.108543   62736 main.go:141] libmachine: (old-k8s-version-471019)   </os>
	I0403 19:19:13.108552   62736 main.go:141] libmachine: (old-k8s-version-471019)   <devices>
	I0403 19:19:13.108561   62736 main.go:141] libmachine: (old-k8s-version-471019)     <disk type='file' device='cdrom'>
	I0403 19:19:13.108575   62736 main.go:141] libmachine: (old-k8s-version-471019)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/boot2docker.iso'/>
	I0403 19:19:13.108594   62736 main.go:141] libmachine: (old-k8s-version-471019)       <target dev='hdc' bus='scsi'/>
	I0403 19:19:13.108603   62736 main.go:141] libmachine: (old-k8s-version-471019)       <readonly/>
	I0403 19:19:13.108611   62736 main.go:141] libmachine: (old-k8s-version-471019)     </disk>
	I0403 19:19:13.108620   62736 main.go:141] libmachine: (old-k8s-version-471019)     <disk type='file' device='disk'>
	I0403 19:19:13.108630   62736 main.go:141] libmachine: (old-k8s-version-471019)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0403 19:19:13.108644   62736 main.go:141] libmachine: (old-k8s-version-471019)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/old-k8s-version-471019.rawdisk'/>
	I0403 19:19:13.108658   62736 main.go:141] libmachine: (old-k8s-version-471019)       <target dev='hda' bus='virtio'/>
	I0403 19:19:13.108668   62736 main.go:141] libmachine: (old-k8s-version-471019)     </disk>
	I0403 19:19:13.108686   62736 main.go:141] libmachine: (old-k8s-version-471019)     <interface type='network'>
	I0403 19:19:13.108698   62736 main.go:141] libmachine: (old-k8s-version-471019)       <source network='mk-old-k8s-version-471019'/>
	I0403 19:19:13.108706   62736 main.go:141] libmachine: (old-k8s-version-471019)       <model type='virtio'/>
	I0403 19:19:13.108722   62736 main.go:141] libmachine: (old-k8s-version-471019)     </interface>
	I0403 19:19:13.108736   62736 main.go:141] libmachine: (old-k8s-version-471019)     <interface type='network'>
	I0403 19:19:13.108746   62736 main.go:141] libmachine: (old-k8s-version-471019)       <source network='default'/>
	I0403 19:19:13.108754   62736 main.go:141] libmachine: (old-k8s-version-471019)       <model type='virtio'/>
	I0403 19:19:13.108762   62736 main.go:141] libmachine: (old-k8s-version-471019)     </interface>
	I0403 19:19:13.108770   62736 main.go:141] libmachine: (old-k8s-version-471019)     <serial type='pty'>
	I0403 19:19:13.108779   62736 main.go:141] libmachine: (old-k8s-version-471019)       <target port='0'/>
	I0403 19:19:13.108786   62736 main.go:141] libmachine: (old-k8s-version-471019)     </serial>
	I0403 19:19:13.108796   62736 main.go:141] libmachine: (old-k8s-version-471019)     <console type='pty'>
	I0403 19:19:13.108810   62736 main.go:141] libmachine: (old-k8s-version-471019)       <target type='serial' port='0'/>
	I0403 19:19:13.108819   62736 main.go:141] libmachine: (old-k8s-version-471019)     </console>
	I0403 19:19:13.108827   62736 main.go:141] libmachine: (old-k8s-version-471019)     <rng model='virtio'>
	I0403 19:19:13.108840   62736 main.go:141] libmachine: (old-k8s-version-471019)       <backend model='random'>/dev/random</backend>
	I0403 19:19:13.108848   62736 main.go:141] libmachine: (old-k8s-version-471019)     </rng>
	I0403 19:19:13.108857   62736 main.go:141] libmachine: (old-k8s-version-471019)     
	I0403 19:19:13.108864   62736 main.go:141] libmachine: (old-k8s-version-471019)     
	I0403 19:19:13.108897   62736 main.go:141] libmachine: (old-k8s-version-471019)   </devices>
	I0403 19:19:13.108919   62736 main.go:141] libmachine: (old-k8s-version-471019) </domain>
	I0403 19:19:13.108936   62736 main.go:141] libmachine: (old-k8s-version-471019) 
	I0403 19:19:13.113704   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:2a:32:f5 in network default
	I0403 19:19:13.115408   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:13.115439   62736 main.go:141] libmachine: (old-k8s-version-471019) starting domain...
	I0403 19:19:13.115453   62736 main.go:141] libmachine: (old-k8s-version-471019) ensuring networks are active...
	I0403 19:19:13.116321   62736 main.go:141] libmachine: (old-k8s-version-471019) Ensuring network default is active
	I0403 19:19:13.116697   62736 main.go:141] libmachine: (old-k8s-version-471019) Ensuring network mk-old-k8s-version-471019 is active
	I0403 19:19:13.117336   62736 main.go:141] libmachine: (old-k8s-version-471019) getting domain XML...
	I0403 19:19:13.118355   62736 main.go:141] libmachine: (old-k8s-version-471019) creating domain...
	I0403 19:19:14.691520   62736 main.go:141] libmachine: (old-k8s-version-471019) waiting for IP...
	I0403 19:19:14.692369   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:14.692903   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:14.692955   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:14.692879   62860 retry.go:31] will retry after 261.722569ms: waiting for domain to come up
	I0403 19:19:14.956727   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:14.957323   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:14.957406   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:14.957330   62860 retry.go:31] will retry after 295.135269ms: waiting for domain to come up
	I0403 19:19:15.254079   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:15.254696   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:15.254725   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:15.254673   62860 retry.go:31] will retry after 460.278639ms: waiting for domain to come up
	I0403 19:19:15.716056   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:15.716587   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:15.716608   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:15.716566   62860 retry.go:31] will retry after 371.218267ms: waiting for domain to come up
	I0403 19:19:16.088949   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:16.089515   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:16.089542   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:16.089477   62860 retry.go:31] will retry after 635.705018ms: waiting for domain to come up
	I0403 19:19:16.727580   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:16.728158   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:16.728195   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:16.728124   62860 retry.go:31] will retry after 602.471996ms: waiting for domain to come up
	I0403 19:19:17.331945   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:17.332510   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:17.332539   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:17.332492   62860 retry.go:31] will retry after 761.071912ms: waiting for domain to come up
	I0403 19:19:18.095667   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:18.096132   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:18.096157   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:18.096110   62860 retry.go:31] will retry after 1.423959912s: waiting for domain to come up
	I0403 19:19:19.521892   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:19.522362   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:19.522417   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:19.522357   62860 retry.go:31] will retry after 1.840778021s: waiting for domain to come up
	I0403 19:19:21.365048   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:21.365536   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:21.365564   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:21.365509   62860 retry.go:31] will retry after 1.397058601s: waiting for domain to come up
	I0403 19:19:22.764782   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:22.765369   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:22.765395   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:22.765333   62860 retry.go:31] will retry after 2.43355181s: waiting for domain to come up
	I0403 19:19:25.200901   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:25.201310   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:25.201375   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:25.201278   62860 retry.go:31] will retry after 3.423338921s: waiting for domain to come up
	I0403 19:19:28.626779   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:28.627352   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:28.627379   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:28.627316   62860 retry.go:31] will retry after 3.071967317s: waiting for domain to come up
	I0403 19:19:31.702571   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:31.703040   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:19:31.703069   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:19:31.703011   62860 retry.go:31] will retry after 4.304834953s: waiting for domain to come up
	I0403 19:19:36.011699   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.012197   62736 main.go:141] libmachine: (old-k8s-version-471019) found domain IP: 192.168.61.209
	I0403 19:19:36.012218   62736 main.go:141] libmachine: (old-k8s-version-471019) reserving static IP address...
	I0403 19:19:36.012246   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has current primary IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.012616   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-471019", mac: "52:54:00:0f:96:04", ip: "192.168.61.209"} in network mk-old-k8s-version-471019
	I0403 19:19:36.088380   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | Getting to WaitForSSH function...
	I0403 19:19:36.088411   62736 main.go:141] libmachine: (old-k8s-version-471019) reserved static IP address 192.168.61.209 for domain old-k8s-version-471019
	I0403 19:19:36.088426   62736 main.go:141] libmachine: (old-k8s-version-471019) waiting for SSH...
	I0403 19:19:36.091598   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.092004   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:36.092033   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.092155   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | Using SSH client type: external
	I0403 19:19:36.092197   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | Using SSH private key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa (-rw-------)
	I0403 19:19:36.092253   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0403 19:19:36.092279   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | About to run SSH command:
	I0403 19:19:36.092296   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | exit 0
	I0403 19:19:36.222605   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | SSH cmd err, output: <nil>: 
	I0403 19:19:36.222985   62736 main.go:141] libmachine: (old-k8s-version-471019) KVM machine creation complete
	I0403 19:19:36.223277   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetConfigRaw
	I0403 19:19:36.223792   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:19:36.223961   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:19:36.224103   62736 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0403 19:19:36.224118   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetState
	I0403 19:19:36.225384   62736 main.go:141] libmachine: Detecting operating system of created instance...
	I0403 19:19:36.225398   62736 main.go:141] libmachine: Waiting for SSH to be available...
	I0403 19:19:36.225403   62736 main.go:141] libmachine: Getting to WaitForSSH function...
	I0403 19:19:36.225408   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:36.227671   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.228101   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:36.228133   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.228294   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:19:36.228441   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:36.228558   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:36.228689   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:19:36.228813   62736 main.go:141] libmachine: Using SSH client type: native
	I0403 19:19:36.229016   62736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:19:36.229026   62736 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0403 19:19:36.338066   62736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:19:36.338096   62736 main.go:141] libmachine: Detecting the provisioner...
	I0403 19:19:36.338107   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:36.341195   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.341619   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:36.341648   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.341817   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:19:36.342026   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:36.342195   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:36.342330   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:19:36.342510   62736 main.go:141] libmachine: Using SSH client type: native
	I0403 19:19:36.342799   62736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:19:36.342817   62736 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0403 19:19:36.459126   62736 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0403 19:19:36.459248   62736 main.go:141] libmachine: found compatible host: buildroot
	I0403 19:19:36.459264   62736 main.go:141] libmachine: Provisioning with buildroot...
	I0403 19:19:36.459274   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetMachineName
	I0403 19:19:36.459496   62736 buildroot.go:166] provisioning hostname "old-k8s-version-471019"
	I0403 19:19:36.459515   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetMachineName
	I0403 19:19:36.459768   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:36.462249   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.462569   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:36.462615   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.462701   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:19:36.462885   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:36.463025   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:36.463178   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:19:36.463368   62736 main.go:141] libmachine: Using SSH client type: native
	I0403 19:19:36.463620   62736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:19:36.463637   62736 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-471019 && echo "old-k8s-version-471019" | sudo tee /etc/hostname
	I0403 19:19:36.594624   62736 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-471019
	
	I0403 19:19:36.594650   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:36.597591   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.598001   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:36.598030   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.598251   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:19:36.598403   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:36.598517   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:36.598623   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:19:36.598878   62736 main.go:141] libmachine: Using SSH client type: native
	I0403 19:19:36.599131   62736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:19:36.599149   62736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-471019' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-471019/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-471019' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0403 19:19:36.727043   62736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:19:36.727071   62736 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20591-14371/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-14371/.minikube}
	I0403 19:19:36.727092   62736 buildroot.go:174] setting up certificates
	I0403 19:19:36.727104   62736 provision.go:84] configureAuth start
	I0403 19:19:36.727116   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetMachineName
	I0403 19:19:36.727383   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetIP
	I0403 19:19:36.730221   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.730625   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:36.730667   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.730949   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:36.733877   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.734338   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:36.734383   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.734542   62736 provision.go:143] copyHostCerts
	I0403 19:19:36.734608   62736 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem, removing ...
	I0403 19:19:36.734629   62736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem
	I0403 19:19:36.734692   62736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem (1082 bytes)
	I0403 19:19:36.734849   62736 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem, removing ...
	I0403 19:19:36.734862   62736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem
	I0403 19:19:36.734898   62736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem (1123 bytes)
	I0403 19:19:36.735003   62736 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem, removing ...
	I0403 19:19:36.735015   62736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem
	I0403 19:19:36.735042   62736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem (1675 bytes)
	I0403 19:19:36.735133   62736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-471019 san=[127.0.0.1 192.168.61.209 localhost minikube old-k8s-version-471019]
	I0403 19:19:36.872444   62736 provision.go:177] copyRemoteCerts
	I0403 19:19:36.872497   62736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0403 19:19:36.872519   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:36.875432   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.875813   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:36.875852   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:36.876027   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:19:36.876228   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:36.876395   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:19:36.876549   62736 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa Username:docker}
	I0403 19:19:36.961154   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0403 19:19:36.984335   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0403 19:19:37.007310   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0403 19:19:37.033403   62736 provision.go:87] duration metric: took 306.284004ms to configureAuth
	I0403 19:19:37.033437   62736 buildroot.go:189] setting minikube options for container-runtime
	I0403 19:19:37.033657   62736 config.go:182] Loaded profile config "old-k8s-version-471019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:19:37.033749   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:37.037112   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.037522   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:37.037556   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.037788   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:19:37.038004   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:37.038165   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:37.038343   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:19:37.038552   62736 main.go:141] libmachine: Using SSH client type: native
	I0403 19:19:37.038842   62736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:19:37.038864   62736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0403 19:19:37.284046   62736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0403 19:19:37.284074   62736 main.go:141] libmachine: Checking connection to Docker...
	I0403 19:19:37.284101   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetURL
	I0403 19:19:37.285445   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | using libvirt version 6000000
	I0403 19:19:37.288566   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.288981   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:37.289029   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.289164   62736 main.go:141] libmachine: Docker is up and running!
	I0403 19:19:37.289178   62736 main.go:141] libmachine: Reticulating splines...
	I0403 19:19:37.289186   62736 client.go:171] duration metric: took 24.680201956s to LocalClient.Create
	I0403 19:19:37.289207   62736 start.go:167] duration metric: took 24.680266539s to libmachine.API.Create "old-k8s-version-471019"
	I0403 19:19:37.289222   62736 start.go:293] postStartSetup for "old-k8s-version-471019" (driver="kvm2")
	I0403 19:19:37.289246   62736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0403 19:19:37.289262   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:19:37.289507   62736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0403 19:19:37.289537   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:37.291892   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.292243   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:37.292273   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.292486   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:19:37.292642   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:37.292782   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:19:37.292891   62736 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa Username:docker}
	I0403 19:19:37.377604   62736 ssh_runner.go:195] Run: cat /etc/os-release
	I0403 19:19:37.381468   62736 info.go:137] Remote host: Buildroot 2023.02.9
	I0403 19:19:37.381491   62736 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/addons for local assets ...
	I0403 19:19:37.381554   62736 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/files for local assets ...
	I0403 19:19:37.381667   62736 filesync.go:149] local asset: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem -> 215522.pem in /etc/ssl/certs
	I0403 19:19:37.381788   62736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0403 19:19:37.391117   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:19:37.413946   62736 start.go:296] duration metric: took 124.702106ms for postStartSetup
	I0403 19:19:37.414000   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetConfigRaw
	I0403 19:19:37.414689   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetIP
	I0403 19:19:37.417377   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.417723   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:37.417761   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.417934   62736 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/config.json ...
	I0403 19:19:37.418114   62736 start.go:128] duration metric: took 24.830198479s to createHost
	I0403 19:19:37.418136   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:37.420313   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.420717   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:37.420753   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.420861   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:19:37.421046   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:37.421224   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:37.421411   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:19:37.421620   62736 main.go:141] libmachine: Using SSH client type: native
	I0403 19:19:37.421815   62736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:19:37.421829   62736 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0403 19:19:37.535141   62736 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743707977.509463372
	
	I0403 19:19:37.535166   62736 fix.go:216] guest clock: 1743707977.509463372
	I0403 19:19:37.535175   62736 fix.go:229] Guest: 2025-04-03 19:19:37.509463372 +0000 UTC Remote: 2025-04-03 19:19:37.418125203 +0000 UTC m=+43.194831141 (delta=91.338169ms)
	I0403 19:19:37.535224   62736 fix.go:200] guest clock delta is within tolerance: 91.338169ms
	I0403 19:19:37.535235   62736 start.go:83] releasing machines lock for "old-k8s-version-471019", held for 24.947526927s
	I0403 19:19:37.535269   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:19:37.535508   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetIP
	I0403 19:19:37.538604   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.538931   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:37.538961   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.539096   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:19:37.539588   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:19:37.539751   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:19:37.539835   62736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0403 19:19:37.539874   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:37.539926   62736 ssh_runner.go:195] Run: cat /version.json
	I0403 19:19:37.539946   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:19:37.542902   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.542953   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.543212   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:37.543239   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.543449   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:19:37.543480   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:37.543504   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:37.543668   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:37.543761   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:19:37.543833   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:19:37.543911   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:19:37.543987   62736 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa Username:docker}
	I0403 19:19:37.544029   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:19:37.544176   62736 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa Username:docker}
	I0403 19:19:37.663260   62736 ssh_runner.go:195] Run: systemctl --version
	I0403 19:19:37.671337   62736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0403 19:19:37.853243   62736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0403 19:19:37.858700   62736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0403 19:19:37.858764   62736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0403 19:19:37.873805   62736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0403 19:19:37.873825   62736 start.go:495] detecting cgroup driver to use...
	I0403 19:19:37.873891   62736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0403 19:19:37.891518   62736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0403 19:19:37.905942   62736 docker.go:217] disabling cri-docker service (if available) ...
	I0403 19:19:37.906019   62736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0403 19:19:37.921994   62736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0403 19:19:37.936310   62736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0403 19:19:38.111896   62736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0403 19:19:38.330608   62736 docker.go:233] disabling docker service ...
	I0403 19:19:38.330683   62736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0403 19:19:38.354970   62736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0403 19:19:38.366959   62736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0403 19:19:38.492145   62736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0403 19:19:38.616615   62736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0403 19:19:38.630049   62736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0403 19:19:38.646986   62736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0403 19:19:38.647049   62736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:19:38.656393   62736 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0403 19:19:38.656442   62736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:19:38.665571   62736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:19:38.675039   62736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
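The three sed edits above rewrite CRI-O's drop-in config so the runtime agrees with the kubelet on the pause image and cgroup handling. A minimal sketch of the relevant keys in /etc/crio/crio.conf.d/02-crio.conf after these edits (the section headers are an assumption for illustration; the sed commands themselves only touch the individual keys):

    [crio.image]
    # sandbox/pause image used for every pod (set by the first sed above)
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    # must match the kubelet's cgroupDriver; both are "cgroupfs" in this run
    cgroup_manager = "cgroupfs"
    # removed and re-inserted after cgroup_manager by the last two sed commands
    conmon_cgroup = "pod"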
	I0403 19:19:38.684392   62736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0403 19:19:38.693658   62736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0403 19:19:38.701937   62736 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0403 19:19:38.701986   62736 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0403 19:19:38.713128   62736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0403 19:19:38.721844   62736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:19:38.840163   62736 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0403 19:19:38.940722   62736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0403 19:19:38.940780   62736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0403 19:19:38.945677   62736 start.go:563] Will wait 60s for crictl version
	I0403 19:19:38.945723   62736 ssh_runner.go:195] Run: which crictl
	I0403 19:19:38.949359   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0403 19:19:38.993178   62736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0403 19:19:38.993280   62736 ssh_runner.go:195] Run: crio --version
	I0403 19:19:39.022496   62736 ssh_runner.go:195] Run: crio --version
	I0403 19:19:39.051437   62736 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0403 19:19:39.052384   62736 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetIP
	I0403 19:19:39.055524   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:39.055889   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:19:27 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:19:39.055924   62736 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:19:39.056165   62736 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0403 19:19:39.059965   62736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 19:19:39.072756   62736 kubeadm.go:883] updating cluster {Name:old-k8s-version-471019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-
version-471019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0403 19:19:39.072846   62736 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0403 19:19:39.072917   62736 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:19:39.107514   62736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0403 19:19:39.107587   62736 ssh_runner.go:195] Run: which lz4
	I0403 19:19:39.113196   62736 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0403 19:19:39.117257   62736 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0403 19:19:39.117288   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0403 19:19:40.539940   62736 crio.go:462] duration metric: took 1.426780883s to copy over tarball
	I0403 19:19:40.540009   62736 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0403 19:19:43.281012   62736 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.74096932s)
	I0403 19:19:43.281043   62736 crio.go:469] duration metric: took 2.741081716s to extract the tarball
	I0403 19:19:43.281050   62736 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0403 19:19:43.322869   62736 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:19:43.371355   62736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0403 19:19:43.371378   62736 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0403 19:19:43.371447   62736 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:19:43.371456   62736 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:19:43.371486   62736 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:19:43.371549   62736 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0403 19:19:43.371607   62736 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0403 19:19:43.371701   62736 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0403 19:19:43.371706   62736 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:19:43.371554   62736 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:19:43.372813   62736 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:19:43.372828   62736 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0403 19:19:43.372842   62736 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:19:43.372847   62736 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:19:43.372820   62736 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0403 19:19:43.372877   62736 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0403 19:19:43.372869   62736 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:19:43.373015   62736 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:19:43.548491   62736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:19:43.574594   62736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0403 19:19:43.588305   62736 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0403 19:19:43.588343   62736 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:19:43.588380   62736 ssh_runner.go:195] Run: which crictl
	I0403 19:19:43.605273   62736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:19:43.630770   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:19:43.630867   62736 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0403 19:19:43.630929   62736 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0403 19:19:43.630979   62736 ssh_runner.go:195] Run: which crictl
	I0403 19:19:43.634168   62736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:19:43.643421   62736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:19:43.663089   62736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0403 19:19:43.666374   62736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0403 19:19:43.674163   62736 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0403 19:19:43.674210   62736 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:19:43.674256   62736 ssh_runner.go:195] Run: which crictl
	I0403 19:19:43.705351   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0403 19:19:43.705411   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:19:43.749726   62736 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0403 19:19:43.749773   62736 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:19:43.749820   62736 ssh_runner.go:195] Run: which crictl
	I0403 19:19:43.772880   62736 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0403 19:19:43.772923   62736 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:19:43.772969   62736 ssh_runner.go:195] Run: which crictl
	I0403 19:19:43.787839   62736 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0403 19:19:43.787900   62736 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0403 19:19:43.787950   62736 ssh_runner.go:195] Run: which crictl
	I0403 19:19:43.818966   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:19:43.819015   62736 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0403 19:19:43.819046   62736 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0403 19:19:43.819047   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:19:43.819088   62736 ssh_runner.go:195] Run: which crictl
	I0403 19:19:43.822409   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0403 19:19:43.822472   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:19:43.822504   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:19:43.822506   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0403 19:19:43.940908   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0403 19:19:43.940978   62736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0403 19:19:43.941043   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:19:43.954515   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0403 19:19:43.965374   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:19:43.965428   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0403 19:19:43.965515   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:19:44.064778   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0403 19:19:44.064825   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:19:44.064855   62736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0403 19:19:44.087976   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0403 19:19:44.087977   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:19:44.088035   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:19:44.175391   62736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0403 19:19:44.175434   62736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0403 19:19:44.202391   62736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0403 19:19:44.202469   62736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0403 19:19:44.202469   62736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0403 19:19:44.225388   62736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0403 19:19:44.620062   62736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:19:44.765012   62736 cache_images.go:92] duration metric: took 1.393617146s to LoadCachedImages
	W0403 19:19:44.765136   62736 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0403 19:19:44.765158   62736 kubeadm.go:934] updating node { 192.168.61.209 8443 v1.20.0 crio true true} ...
	I0403 19:19:44.765268   62736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-471019 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-471019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
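The doubled ExecStart= in the kubelet unit text above is deliberate: in a systemd drop-in, an empty ExecStart= first clears the command inherited from the base kubelet.service, and the second line sets the minikube-specific command. A sketch of the drop-in this text becomes (presumably the 430-byte file scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; comments added here for illustration):

    [Unit]
    Wants=crio.service

    [Service]
    # clear ExecStart inherited from kubelet.service, then redefine it
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-471019 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.209

    [Install]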
	I0403 19:19:44.765348   62736 ssh_runner.go:195] Run: crio config
	I0403 19:19:44.819937   62736 cni.go:84] Creating CNI manager for ""
	I0403 19:19:44.819960   62736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:19:44.819974   62736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0403 19:19:44.820000   62736 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.209 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-471019 NodeName:old-k8s-version-471019 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0403 19:19:44.820138   62736 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-471019"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
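The YAML above is the kubeadm config minikube renders for v1.20.0 (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written to /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before the kubeadm init at 19:19:45. A hedged sketch of how such a rendered config could be checked by hand on the node, assuming the bundled kubeadm v1.20 supports --dry-run (illustrative only, not part of the test run):

    # validate the rendered config without changing node state (illustrative)
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run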
	
	I0403 19:19:44.820196   62736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0403 19:19:44.829938   62736 binaries.go:44] Found k8s binaries, skipping transfer
	I0403 19:19:44.830007   62736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0403 19:19:44.839514   62736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0403 19:19:44.857180   62736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0403 19:19:44.872722   62736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0403 19:19:44.888219   62736 ssh_runner.go:195] Run: grep 192.168.61.209	control-plane.minikube.internal$ /etc/hosts
	I0403 19:19:44.891696   62736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 19:19:44.902916   62736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:19:45.023907   62736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:19:45.040547   62736 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019 for IP: 192.168.61.209
	I0403 19:19:45.040570   62736 certs.go:194] generating shared ca certs ...
	I0403 19:19:45.040591   62736 certs.go:226] acquiring lock for ca certs: {Name:mk34f09724355b828d31eb1ee97771fd3b4645c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:19:45.040754   62736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key
	I0403 19:19:45.040820   62736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key
	I0403 19:19:45.040835   62736 certs.go:256] generating profile certs ...
	I0403 19:19:45.040906   62736 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/client.key
	I0403 19:19:45.040946   62736 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/client.crt with IP's: []
	I0403 19:19:45.140861   62736 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/client.crt ...
	I0403 19:19:45.140892   62736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/client.crt: {Name:mke3f109960399f05c6c31997b5fa78b28e53105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:19:45.141070   62736 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/client.key ...
	I0403 19:19:45.141086   62736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/client.key: {Name:mk4f8976e9dc26f00150b9613022298b7a16423f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:19:45.141191   62736 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.key.6f94e3bf
	I0403 19:19:45.141213   62736 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.crt.6f94e3bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.209]
	I0403 19:19:45.311019   62736 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.crt.6f94e3bf ...
	I0403 19:19:45.311054   62736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.crt.6f94e3bf: {Name:mk998f06e06a3a2bb010058aae0d91a1d90b7bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:19:45.311269   62736 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.key.6f94e3bf ...
	I0403 19:19:45.311291   62736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.key.6f94e3bf: {Name:mk9ea176363b6be5859042bbe52c3f4a8a862054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:19:45.311421   62736 certs.go:381] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.crt.6f94e3bf -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.crt
	I0403 19:19:45.311540   62736 certs.go:385] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.key.6f94e3bf -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.key
	I0403 19:19:45.311637   62736 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.key
	I0403 19:19:45.311663   62736 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.crt with IP's: []
	I0403 19:19:45.331685   62736 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.crt ...
	I0403 19:19:45.331715   62736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.crt: {Name:mk7ac42060222355800a4c4a1aacbc04aacae32f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:19:45.331880   62736 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.key ...
	I0403 19:19:45.331898   62736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.key: {Name:mk434ff38f3ed84bbcbb1e531743c98d95c5bbdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:19:45.332090   62736 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem (1338 bytes)
	W0403 19:19:45.332139   62736 certs.go:480] ignoring /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552_empty.pem, impossibly tiny 0 bytes
	I0403 19:19:45.332155   62736 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem (1675 bytes)
	I0403 19:19:45.332194   62736 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem (1082 bytes)
	I0403 19:19:45.332227   62736 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem (1123 bytes)
	I0403 19:19:45.332261   62736 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem (1675 bytes)
	I0403 19:19:45.332315   62736 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:19:45.332839   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0403 19:19:45.360309   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0403 19:19:45.385185   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0403 19:19:45.408356   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0403 19:19:45.433211   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0403 19:19:45.458296   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0403 19:19:45.483154   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0403 19:19:45.506360   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0403 19:19:45.527791   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem --> /usr/share/ca-certificates/21552.pem (1338 bytes)
	I0403 19:19:45.549031   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /usr/share/ca-certificates/215522.pem (1708 bytes)
	I0403 19:19:45.571792   62736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0403 19:19:45.593887   62736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0403 19:19:45.609188   62736 ssh_runner.go:195] Run: openssl version
	I0403 19:19:45.614670   62736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21552.pem && ln -fs /usr/share/ca-certificates/21552.pem /etc/ssl/certs/21552.pem"
	I0403 19:19:45.624351   62736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21552.pem
	I0403 19:19:45.628589   62736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  3 18:20 /usr/share/ca-certificates/21552.pem
	I0403 19:19:45.628651   62736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21552.pem
	I0403 19:19:45.636036   62736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21552.pem /etc/ssl/certs/51391683.0"
	I0403 19:19:45.646416   62736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215522.pem && ln -fs /usr/share/ca-certificates/215522.pem /etc/ssl/certs/215522.pem"
	I0403 19:19:45.656256   62736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215522.pem
	I0403 19:19:45.660682   62736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  3 18:20 /usr/share/ca-certificates/215522.pem
	I0403 19:19:45.660745   62736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215522.pem
	I0403 19:19:45.666027   62736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215522.pem /etc/ssl/certs/3ec20f2e.0"
	I0403 19:19:45.675985   62736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0403 19:19:45.686145   62736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:19:45.690336   62736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  3 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:19:45.690393   62736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:19:45.695799   62736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
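The openssl/ln sequence above installs each CA bundle under /etc/ssl/certs by its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0 in this run). A minimal sketch of the same pattern for one certificate, with the hash computed rather than hard-coded (paths taken from the log; illustrative):

    # link a CA cert into /etc/ssl/certs under its subject-hash name (illustrative)
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"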
	I0403 19:19:45.710983   62736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0403 19:19:45.716089   62736 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0403 19:19:45.716137   62736 kubeadm.go:392] StartCluster: {Name:old-k8s-version-471019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-ver
sion-471019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:19:45.716200   62736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0403 19:19:45.716240   62736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 19:19:45.758160   62736 cri.go:89] found id: ""
	I0403 19:19:45.758230   62736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0403 19:19:45.768282   62736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0403 19:19:45.778376   62736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:19:45.787231   62736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:19:45.787254   62736 kubeadm.go:157] found existing configuration files:
	
	I0403 19:19:45.787304   62736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:19:45.795613   62736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:19:45.795690   62736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:19:45.804555   62736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:19:45.813111   62736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:19:45.813159   62736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:19:45.823998   62736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:19:45.837975   62736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:19:45.838035   62736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:19:45.851743   62736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:19:45.863408   62736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:19:45.863505   62736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:19:45.875774   62736 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0403 19:19:46.160151   62736 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:21:43.795497   62736 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0403 19:21:43.795657   62736 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0403 19:21:43.797410   62736 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0403 19:21:43.797522   62736 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 19:21:43.797709   62736 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 19:21:43.798069   62736 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 19:21:43.798315   62736 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0403 19:21:43.798598   62736 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 19:21:43.800594   62736 out.go:235]   - Generating certificates and keys ...
	I0403 19:21:43.800673   62736 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 19:21:43.800731   62736 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 19:21:43.800832   62736 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0403 19:21:43.800924   62736 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0403 19:21:43.801001   62736 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0403 19:21:43.801076   62736 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0403 19:21:43.801134   62736 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0403 19:21:43.801296   62736 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-471019] and IPs [192.168.61.209 127.0.0.1 ::1]
	I0403 19:21:43.801367   62736 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0403 19:21:43.801495   62736 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-471019] and IPs [192.168.61.209 127.0.0.1 ::1]
	I0403 19:21:43.801584   62736 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0403 19:21:43.801661   62736 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0403 19:21:43.801727   62736 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0403 19:21:43.801806   62736 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 19:21:43.801875   62736 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 19:21:43.801976   62736 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 19:21:43.802031   62736 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 19:21:43.802078   62736 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 19:21:43.802164   62736 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 19:21:43.802265   62736 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 19:21:43.802310   62736 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 19:21:43.802373   62736 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 19:21:43.803630   62736 out.go:235]   - Booting up control plane ...
	I0403 19:21:43.803700   62736 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 19:21:43.803764   62736 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 19:21:43.803823   62736 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 19:21:43.803917   62736 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 19:21:43.804079   62736 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0403 19:21:43.804150   62736 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0403 19:21:43.804227   62736 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:21:43.804462   62736 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:21:43.804564   62736 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:21:43.804790   62736 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:21:43.804887   62736 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:21:43.805088   62736 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:21:43.805149   62736 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:21:43.805329   62736 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:21:43.805396   62736 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:21:43.805556   62736 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:21:43.805563   62736 kubeadm.go:310] 
	I0403 19:21:43.805598   62736 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0403 19:21:43.805631   62736 kubeadm.go:310] 		timed out waiting for the condition
	I0403 19:21:43.805637   62736 kubeadm.go:310] 
	I0403 19:21:43.805666   62736 kubeadm.go:310] 	This error is likely caused by:
	I0403 19:21:43.805693   62736 kubeadm.go:310] 		- The kubelet is not running
	I0403 19:21:43.805797   62736 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0403 19:21:43.805814   62736 kubeadm.go:310] 
	I0403 19:21:43.805952   62736 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0403 19:21:43.805988   62736 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0403 19:21:43.806017   62736 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0403 19:21:43.806022   62736 kubeadm.go:310] 
	I0403 19:21:43.806115   62736 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0403 19:21:43.806186   62736 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0403 19:21:43.806192   62736 kubeadm.go:310] 
	I0403 19:21:43.806277   62736 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0403 19:21:43.806355   62736 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0403 19:21:43.806427   62736 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0403 19:21:43.806492   62736 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0403 19:21:43.806567   62736 kubeadm.go:310] 
	W0403 19:21:43.806622   62736 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-471019] and IPs [192.168.61.209 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-471019] and IPs [192.168.61.209 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-471019] and IPs [192.168.61.209 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-471019] and IPs [192.168.61.209 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0403 19:21:43.806651   62736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0403 19:21:44.262385   62736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:21:44.276629   62736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:21:44.285640   62736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:21:44.285661   62736 kubeadm.go:157] found existing configuration files:
	
	I0403 19:21:44.285707   62736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:21:44.294502   62736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:21:44.294558   62736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:21:44.303081   62736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:21:44.311282   62736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:21:44.311330   62736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:21:44.319962   62736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:21:44.328031   62736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:21:44.328078   62736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:21:44.336507   62736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:21:44.344472   62736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:21:44.344522   62736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:21:44.354674   62736 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0403 19:21:44.417351   62736 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0403 19:21:44.417418   62736 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 19:21:44.549245   62736 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 19:21:44.549422   62736 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 19:21:44.549536   62736 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0403 19:21:44.709191   62736 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 19:21:44.711707   62736 out.go:235]   - Generating certificates and keys ...
	I0403 19:21:44.711811   62736 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 19:21:44.711897   62736 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 19:21:44.712011   62736 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0403 19:21:44.712108   62736 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0403 19:21:44.712214   62736 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0403 19:21:44.712294   62736 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0403 19:21:44.712390   62736 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0403 19:21:44.712477   62736 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0403 19:21:44.712582   62736 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0403 19:21:44.712707   62736 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0403 19:21:44.712771   62736 kubeadm.go:310] [certs] Using the existing "sa" key
	I0403 19:21:44.712847   62736 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 19:21:44.979714   62736 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 19:21:45.129581   62736 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 19:21:45.361486   62736 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 19:21:45.452688   62736 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 19:21:45.470833   62736 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 19:21:45.474051   62736 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 19:21:45.474122   62736 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 19:21:45.615577   62736 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 19:21:45.619032   62736 out.go:235]   - Booting up control plane ...
	I0403 19:21:45.619157   62736 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 19:21:45.626663   62736 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 19:21:45.629215   62736 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 19:21:45.630813   62736 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 19:21:45.633018   62736 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0403 19:22:25.636208   62736 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0403 19:22:25.636681   62736 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:22:25.636917   62736 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:22:30.637610   62736 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:22:30.637941   62736 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:22:40.638338   62736 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:22:40.638536   62736 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:23:00.637255   62736 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:23:00.637509   62736 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:23:40.636829   62736 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:23:40.637110   62736 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:23:40.637127   62736 kubeadm.go:310] 
	I0403 19:23:40.637184   62736 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0403 19:23:40.637479   62736 kubeadm.go:310] 		timed out waiting for the condition
	I0403 19:23:40.637503   62736 kubeadm.go:310] 
	I0403 19:23:40.637561   62736 kubeadm.go:310] 	This error is likely caused by:
	I0403 19:23:40.637614   62736 kubeadm.go:310] 		- The kubelet is not running
	I0403 19:23:40.637767   62736 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0403 19:23:40.637786   62736 kubeadm.go:310] 
	I0403 19:23:40.637939   62736 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0403 19:23:40.637991   62736 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0403 19:23:40.638039   62736 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0403 19:23:40.638049   62736 kubeadm.go:310] 
	I0403 19:23:40.638183   62736 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0403 19:23:40.638312   62736 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0403 19:23:40.638328   62736 kubeadm.go:310] 
	I0403 19:23:40.638459   62736 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0403 19:23:40.638585   62736 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0403 19:23:40.638697   62736 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0403 19:23:40.638803   62736 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0403 19:23:40.638877   62736 kubeadm.go:310] 
	I0403 19:23:40.640270   62736 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:23:40.640412   62736 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0403 19:23:40.640500   62736 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0403 19:23:40.640567   62736 kubeadm.go:394] duration metric: took 3m54.924431903s to StartCluster
	I0403 19:23:40.640605   62736 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:23:40.640662   62736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:23:40.698483   62736 cri.go:89] found id: ""
	I0403 19:23:40.698512   62736 logs.go:282] 0 containers: []
	W0403 19:23:40.698541   62736 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:23:40.698550   62736 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:23:40.698614   62736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:23:40.732385   62736 cri.go:89] found id: ""
	I0403 19:23:40.732411   62736 logs.go:282] 0 containers: []
	W0403 19:23:40.732420   62736 logs.go:284] No container was found matching "etcd"
	I0403 19:23:40.732427   62736 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:23:40.732486   62736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:23:40.773783   62736 cri.go:89] found id: ""
	I0403 19:23:40.773819   62736 logs.go:282] 0 containers: []
	W0403 19:23:40.773827   62736 logs.go:284] No container was found matching "coredns"
	I0403 19:23:40.773833   62736 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:23:40.773882   62736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:23:40.806839   62736 cri.go:89] found id: ""
	I0403 19:23:40.806866   62736 logs.go:282] 0 containers: []
	W0403 19:23:40.806876   62736 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:23:40.806883   62736 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:23:40.806946   62736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:23:40.843509   62736 cri.go:89] found id: ""
	I0403 19:23:40.843534   62736 logs.go:282] 0 containers: []
	W0403 19:23:40.843545   62736 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:23:40.843552   62736 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:23:40.843606   62736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:23:40.889463   62736 cri.go:89] found id: ""
	I0403 19:23:40.889490   62736 logs.go:282] 0 containers: []
	W0403 19:23:40.889501   62736 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:23:40.889509   62736 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:23:40.889565   62736 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:23:40.922464   62736 cri.go:89] found id: ""
	I0403 19:23:40.922496   62736 logs.go:282] 0 containers: []
	W0403 19:23:40.922507   62736 logs.go:284] No container was found matching "kindnet"
	I0403 19:23:40.922520   62736 logs.go:123] Gathering logs for kubelet ...
	I0403 19:23:40.922533   62736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:23:40.978707   62736 logs.go:123] Gathering logs for dmesg ...
	I0403 19:23:40.978737   62736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:23:40.992502   62736 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:23:40.992533   62736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:23:41.109954   62736 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:23:41.109976   62736 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:23:41.109993   62736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:23:41.216798   62736 logs.go:123] Gathering logs for container status ...
	I0403 19:23:41.216836   62736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0403 19:23:41.257407   62736 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0403 19:23:41.257470   62736 out.go:270] * 
	* 
	W0403 19:23:41.257527   62736 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:23:41.257544   62736 out.go:270] * 
	* 
	W0403 19:23:41.258550   62736 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0403 19:23:41.261472   62736 out.go:201] 
	W0403 19:23:41.262528   62736 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:23:41.262573   62736 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0403 19:23:41.262601   62736 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0403 19:23:41.263962   62736 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-471019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 6 (239.48083ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0403 19:23:41.554323   65837 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-471019" does not appear in /home/jenkins/minikube-integration/20591-14371/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-471019" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (287.35s)
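Note: the kubeadm output above fails at wait-control-plane because the kubelet never answers on http://localhost:10248/healthz, and minikube's own suggestion at the end of the log is to inspect the kubelet and retry with the systemd cgroup driver. A minimal manual follow-up of that advice, assuming the old-k8s-version-471019 profile and the same kvm2/crio flags the test passes, might look like:
	# Inspect the kubelet inside the test VM (commands taken from the kubeadm hint above)
	minikube -p old-k8s-version-471019 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-471019 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# List any control-plane containers cri-o managed to start
	minikube -p old-k8s-version-471019 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# Retry the profile with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-471019 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd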

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-471019 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-471019 create -f testdata/busybox.yaml: exit status 1 (44.838182ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-471019" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-471019 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 6 (225.954092ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0403 19:23:41.825779   65877 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-471019" does not appear in /home/jenkins/minikube-integration/20591-14371/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-471019" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 6 (223.182341ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0403 19:23:42.049650   65907 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-471019" does not appear in /home/jenkins/minikube-integration/20591-14371/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-471019" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)
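Note on the failure above: the "old-k8s-version-471019" context is missing from /home/jenkins/minikube-integration/20591-14371/kubeconfig, so the kubectl create fails before it ever reaches the cluster, and the post-mortem status warns that kubectl points at a stale minikube VM. A minimal recovery sketch, assuming the profile and VM still exist; it only reuses subcommands already referenced in this log plus the standard `kubectl config get-contexts`:

	# confirm the context really is absent from the kubeconfig
	kubectl config get-contexts
	# regenerate the kubeconfig entry, as the status warning suggests
	out/minikube-linux-amd64 update-context -p old-k8s-version-471019
	# re-check host/apiserver/kubelet state before retrying the deployment
	out/minikube-linux-amd64 status -p old-k8s-version-471019
	kubectl --context old-k8s-version-471019 create -f testdata/busybox.yaml

Whether this recovers the run also depends on the control plane itself: the SecondStart attempt later in this report exits with status 109, so the apiserver may simply never have come up.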

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (101.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-471019 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0403 19:24:09.255715   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:24:34.401661   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-471019 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m41.111414593s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-471019 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-471019 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-471019 describe deploy/metrics-server -n kube-system: exit status 1 (43.926044ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-471019" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-471019 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 6 (225.320583ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0403 19:25:23.430906   66604 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-471019" does not appear in /home/jenkins/minikube-integration/20591-14371/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-471019" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (101.38s)
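Note on the failure above: the addon callback runs kubectl inside the guest against localhost:8443 and the connection is refused, i.e. the apiserver on the node is not answering, so the metrics-server manifests cannot be applied. A hedged follow-up sketch, assuming crictl is present in the guest (it normally ships in the minikube ISO used with CRI-O) and reusing the log-collection command from the error box above:

	# see whether a kube-apiserver container was ever created on the node
	out/minikube-linux-amd64 -p old-k8s-version-471019 ssh "sudo crictl ps -a"
	# gather the full log bundle the error box asks for
	out/minikube-linux-amd64 -p old-k8s-version-471019 logs --file=logs.txt
	# retry once the control plane answers on 8443
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-471019 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain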

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (509.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-471019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-471019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m27.956682386s)

                                                
                                                
-- stdout --
	* [old-k8s-version-471019] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-471019" primary control-plane node in "old-k8s-version-471019" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-471019" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0403 19:25:25.974651   66718 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:25:25.974766   66718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:25:25.974778   66718 out.go:358] Setting ErrFile to fd 2...
	I0403 19:25:25.974786   66718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:25:25.975036   66718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:25:25.975654   66718 out.go:352] Setting JSON to false
	I0403 19:25:25.976742   66718 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7671,"bootTime":1743700655,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:25:25.976843   66718 start.go:139] virtualization: kvm guest
	I0403 19:25:25.978938   66718 out.go:177] * [old-k8s-version-471019] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:25:25.980192   66718 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:25:25.980244   66718 notify.go:220] Checking for updates...
	I0403 19:25:25.982729   66718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:25:25.983876   66718 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:25:25.984976   66718 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:25:25.986032   66718 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:25:25.987189   66718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:25:25.988729   66718 config.go:182] Loaded profile config "old-k8s-version-471019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:25:25.989140   66718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:25:25.989187   66718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:25:26.004321   66718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0403 19:25:26.004797   66718 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:25:26.005380   66718 main.go:141] libmachine: Using API Version  1
	I0403 19:25:26.005408   66718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:25:26.005787   66718 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:25:26.006038   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:25:26.007976   66718 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0403 19:25:26.009175   66718 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:25:26.009665   66718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:25:26.009760   66718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:25:26.025909   66718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37097
	I0403 19:25:26.026337   66718 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:25:26.026774   66718 main.go:141] libmachine: Using API Version  1
	I0403 19:25:26.026799   66718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:25:26.027099   66718 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:25:26.027279   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:25:26.064618   66718 out.go:177] * Using the kvm2 driver based on existing profile
	I0403 19:25:26.065863   66718 start.go:297] selected driver: kvm2
	I0403 19:25:26.065886   66718 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-471019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-471019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:25:26.066006   66718 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:25:26.066677   66718 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:25:26.066740   66718 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 19:25:26.083007   66718 install.go:137] /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0403 19:25:26.083454   66718 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:25:26.083510   66718 cni.go:84] Creating CNI manager for ""
	I0403 19:25:26.083565   66718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:25:26.083627   66718 start.go:340] cluster config:
	{Name:old-k8s-version-471019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-471019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:25:26.083729   66718 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:25:26.085503   66718 out.go:177] * Starting "old-k8s-version-471019" primary control-plane node in "old-k8s-version-471019" cluster
	I0403 19:25:26.086705   66718 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0403 19:25:26.086745   66718 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0403 19:25:26.086751   66718 cache.go:56] Caching tarball of preloaded images
	I0403 19:25:26.086937   66718 preload.go:172] Found /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0403 19:25:26.086951   66718 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0403 19:25:26.087035   66718 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/config.json ...
	I0403 19:25:26.087225   66718 start.go:360] acquireMachinesLock for old-k8s-version-471019: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 19:25:26.087307   66718 start.go:364] duration metric: took 57.315µs to acquireMachinesLock for "old-k8s-version-471019"
	I0403 19:25:26.087328   66718 start.go:96] Skipping create...Using existing machine configuration
	I0403 19:25:26.087337   66718 fix.go:54] fixHost starting: 
	I0403 19:25:26.087581   66718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:25:26.087612   66718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:25:26.103038   66718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46389
	I0403 19:25:26.103514   66718 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:25:26.104134   66718 main.go:141] libmachine: Using API Version  1
	I0403 19:25:26.104172   66718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:25:26.104510   66718 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:25:26.104729   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:25:26.104886   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetState
	I0403 19:25:26.109166   66718 fix.go:112] recreateIfNeeded on old-k8s-version-471019: state=Stopped err=<nil>
	I0403 19:25:26.109198   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	W0403 19:25:26.109370   66718 fix.go:138] unexpected machine state, will restart: <nil>
	I0403 19:25:26.111669   66718 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-471019" ...
	I0403 19:25:26.112673   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .Start
	I0403 19:25:26.113166   66718 main.go:141] libmachine: (old-k8s-version-471019) starting domain...
	I0403 19:25:26.113182   66718 main.go:141] libmachine: (old-k8s-version-471019) ensuring networks are active...
	I0403 19:25:26.115618   66718 main.go:141] libmachine: (old-k8s-version-471019) Ensuring network default is active
	I0403 19:25:26.116116   66718 main.go:141] libmachine: (old-k8s-version-471019) Ensuring network mk-old-k8s-version-471019 is active
	I0403 19:25:26.116685   66718 main.go:141] libmachine: (old-k8s-version-471019) getting domain XML...
	I0403 19:25:26.117576   66718 main.go:141] libmachine: (old-k8s-version-471019) creating domain...
	I0403 19:25:27.807029   66718 main.go:141] libmachine: (old-k8s-version-471019) waiting for IP...
	I0403 19:25:27.808111   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:27.808680   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:27.808784   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:27.808665   66754 retry.go:31] will retry after 241.391065ms: waiting for domain to come up
	I0403 19:25:28.052475   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:28.053114   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:28.053143   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:28.053087   66754 retry.go:31] will retry after 237.047746ms: waiting for domain to come up
	I0403 19:25:28.291825   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:28.292580   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:28.292650   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:28.292570   66754 retry.go:31] will retry after 475.423232ms: waiting for domain to come up
	I0403 19:25:28.770128   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:28.770589   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:28.770615   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:28.770563   66754 retry.go:31] will retry after 427.450554ms: waiting for domain to come up
	I0403 19:25:29.199254   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:29.199793   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:29.199829   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:29.199757   66754 retry.go:31] will retry after 687.986424ms: waiting for domain to come up
	I0403 19:25:29.889149   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:29.889745   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:29.889800   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:29.889733   66754 retry.go:31] will retry after 772.983932ms: waiting for domain to come up
	I0403 19:25:30.663981   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:30.664464   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:30.664522   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:30.664433   66754 retry.go:31] will retry after 1.145077684s: waiting for domain to come up
	I0403 19:25:31.811297   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:31.811804   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:31.811831   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:31.811772   66754 retry.go:31] will retry after 1.048887662s: waiting for domain to come up
	I0403 19:25:32.862521   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:32.863072   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:32.863101   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:32.863041   66754 retry.go:31] will retry after 1.301002647s: waiting for domain to come up
	I0403 19:25:34.165430   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:34.165911   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:34.165977   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:34.165899   66754 retry.go:31] will retry after 2.035357043s: waiting for domain to come up
	I0403 19:25:36.202529   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:36.203008   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:36.203039   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:36.202969   66754 retry.go:31] will retry after 2.168806083s: waiting for domain to come up
	I0403 19:25:38.373588   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:38.374270   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:38.374316   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:38.374236   66754 retry.go:31] will retry after 3.062189483s: waiting for domain to come up
	I0403 19:25:41.437918   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:41.438527   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | unable to find current IP address of domain old-k8s-version-471019 in network mk-old-k8s-version-471019
	I0403 19:25:41.438556   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | I0403 19:25:41.438483   66754 retry.go:31] will retry after 3.46160239s: waiting for domain to come up
	I0403 19:25:44.903857   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:44.904342   66718 main.go:141] libmachine: (old-k8s-version-471019) found domain IP: 192.168.61.209
	I0403 19:25:44.904363   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has current primary IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:44.904369   66718 main.go:141] libmachine: (old-k8s-version-471019) reserving static IP address...
	I0403 19:25:44.904827   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "old-k8s-version-471019", mac: "52:54:00:0f:96:04", ip: "192.168.61.209"} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:44.904856   66718 main.go:141] libmachine: (old-k8s-version-471019) reserved static IP address 192.168.61.209 for domain old-k8s-version-471019
	I0403 19:25:44.904876   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | skip adding static IP to network mk-old-k8s-version-471019 - found existing host DHCP lease matching {name: "old-k8s-version-471019", mac: "52:54:00:0f:96:04", ip: "192.168.61.209"}
	I0403 19:25:44.904894   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | Getting to WaitForSSH function...
	I0403 19:25:44.904906   66718 main.go:141] libmachine: (old-k8s-version-471019) waiting for SSH...
	I0403 19:25:44.907068   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:44.907386   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:44.907403   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:44.907515   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | Using SSH client type: external
	I0403 19:25:44.907546   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | Using SSH private key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa (-rw-------)
	I0403 19:25:44.907583   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0403 19:25:44.907605   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | About to run SSH command:
	I0403 19:25:44.907618   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | exit 0
	I0403 19:25:45.034559   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | SSH cmd err, output: <nil>: 
	I0403 19:25:45.034930   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetConfigRaw
	I0403 19:25:45.035507   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetIP
	I0403 19:25:45.038309   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.038714   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:45.038751   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.039060   66718 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/config.json ...
	I0403 19:25:45.039323   66718 machine.go:93] provisionDockerMachine start ...
	I0403 19:25:45.039346   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:25:45.039574   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:25:45.041850   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.042200   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:45.042227   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.042452   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:25:45.042619   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:45.042785   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:45.042949   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:25:45.043161   66718 main.go:141] libmachine: Using SSH client type: native
	I0403 19:25:45.043405   66718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:25:45.043417   66718 main.go:141] libmachine: About to run SSH command:
	hostname
	I0403 19:25:45.150631   66718 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0403 19:25:45.150661   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetMachineName
	I0403 19:25:45.150908   66718 buildroot.go:166] provisioning hostname "old-k8s-version-471019"
	I0403 19:25:45.150937   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetMachineName
	I0403 19:25:45.151119   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:25:45.153648   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.154002   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:45.154040   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.154185   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:25:45.154354   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:45.154499   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:45.154598   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:25:45.154766   66718 main.go:141] libmachine: Using SSH client type: native
	I0403 19:25:45.154992   66718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:25:45.155017   66718 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-471019 && echo "old-k8s-version-471019" | sudo tee /etc/hostname
	I0403 19:25:45.280855   66718 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-471019
	
	I0403 19:25:45.280889   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:25:45.283759   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.284105   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:45.284132   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.284303   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:25:45.284484   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:45.284625   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:45.284770   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:25:45.284958   66718 main.go:141] libmachine: Using SSH client type: native
	I0403 19:25:45.285186   66718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:25:45.285212   66718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-471019' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-471019/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-471019' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0403 19:25:45.402912   66718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:25:45.402951   66718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20591-14371/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-14371/.minikube}
	I0403 19:25:45.403008   66718 buildroot.go:174] setting up certificates
	I0403 19:25:45.403018   66718 provision.go:84] configureAuth start
	I0403 19:25:45.403027   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetMachineName
	I0403 19:25:45.403275   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetIP
	I0403 19:25:45.406311   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.406689   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:45.406718   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.406916   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:25:45.409336   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.409618   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:45.409643   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.409795   66718 provision.go:143] copyHostCerts
	I0403 19:25:45.409843   66718 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem, removing ...
	I0403 19:25:45.409865   66718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem
	I0403 19:25:45.409959   66718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem (1082 bytes)
	I0403 19:25:45.410089   66718 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem, removing ...
	I0403 19:25:45.410102   66718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem
	I0403 19:25:45.410141   66718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem (1123 bytes)
	I0403 19:25:45.410220   66718 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem, removing ...
	I0403 19:25:45.410231   66718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem
	I0403 19:25:45.410265   66718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem (1675 bytes)
	I0403 19:25:45.410334   66718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-471019 san=[127.0.0.1 192.168.61.209 localhost minikube old-k8s-version-471019]
	I0403 19:25:45.494417   66718 provision.go:177] copyRemoteCerts
	I0403 19:25:45.494476   66718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0403 19:25:45.494502   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:25:45.497045   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.497384   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:45.497405   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.497586   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:25:45.497774   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:45.498011   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:25:45.498162   66718 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa Username:docker}
	I0403 19:25:45.584838   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0403 19:25:45.608076   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0403 19:25:45.630678   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0403 19:25:45.652361   66718 provision.go:87] duration metric: took 249.330335ms to configureAuth
	I0403 19:25:45.652395   66718 buildroot.go:189] setting minikube options for container-runtime
	I0403 19:25:45.652641   66718 config.go:182] Loaded profile config "old-k8s-version-471019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:25:45.652732   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:25:45.655668   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.656094   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:45.656139   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.656351   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:25:45.656557   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:45.656693   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:45.656828   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:25:45.656944   66718 main.go:141] libmachine: Using SSH client type: native
	I0403 19:25:45.657195   66718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:25:45.657209   66718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0403 19:25:45.878226   66718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0403 19:25:45.878253   66718 machine.go:96] duration metric: took 838.914603ms to provisionDockerMachine
	I0403 19:25:45.878262   66718 start.go:293] postStartSetup for "old-k8s-version-471019" (driver="kvm2")
	I0403 19:25:45.878271   66718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0403 19:25:45.878286   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:25:45.878609   66718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0403 19:25:45.878650   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:25:45.881210   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.881536   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:45.881577   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:45.881750   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:25:45.881929   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:45.882065   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:25:45.882170   66718 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa Username:docker}
	I0403 19:25:45.965378   66718 ssh_runner.go:195] Run: cat /etc/os-release
	I0403 19:25:45.969423   66718 info.go:137] Remote host: Buildroot 2023.02.9
	I0403 19:25:45.969445   66718 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/addons for local assets ...
	I0403 19:25:45.969507   66718 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/files for local assets ...
	I0403 19:25:45.969602   66718 filesync.go:149] local asset: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem -> 215522.pem in /etc/ssl/certs
	I0403 19:25:45.969720   66718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0403 19:25:45.978616   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:25:46.000401   66718 start.go:296] duration metric: took 122.124501ms for postStartSetup
	I0403 19:25:46.000442   66718 fix.go:56] duration metric: took 19.913104236s for fixHost
	I0403 19:25:46.000466   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:25:46.003145   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:46.003534   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:46.003559   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:46.003742   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:25:46.003938   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:46.004111   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:46.004292   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:25:46.004428   66718 main.go:141] libmachine: Using SSH client type: native
	I0403 19:25:46.004625   66718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0403 19:25:46.004634   66718 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0403 19:25:46.111244   66718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743708346.085245205
	
	I0403 19:25:46.111265   66718 fix.go:216] guest clock: 1743708346.085245205
	I0403 19:25:46.111275   66718 fix.go:229] Guest: 2025-04-03 19:25:46.085245205 +0000 UTC Remote: 2025-04-03 19:25:46.000447297 +0000 UTC m=+20.066069444 (delta=84.797908ms)
	I0403 19:25:46.111299   66718 fix.go:200] guest clock delta is within tolerance: 84.797908ms
	I0403 19:25:46.111318   66718 start.go:83] releasing machines lock for "old-k8s-version-471019", held for 20.023985308s
	I0403 19:25:46.111342   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:25:46.111537   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetIP
	I0403 19:25:46.114018   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:46.114332   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:46.114363   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:46.114535   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:25:46.115012   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:25:46.115195   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .DriverName
	I0403 19:25:46.115313   66718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0403 19:25:46.115365   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:25:46.115421   66718 ssh_runner.go:195] Run: cat /version.json
	I0403 19:25:46.115445   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHHostname
	I0403 19:25:46.117735   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:46.118051   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:46.118083   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:46.118117   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:46.118289   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:25:46.118452   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:46.118554   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:46.118584   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:46.118596   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:25:46.118687   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHPort
	I0403 19:25:46.118752   66718 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa Username:docker}
	I0403 19:25:46.118790   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHKeyPath
	I0403 19:25:46.118915   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetSSHUsername
	I0403 19:25:46.119057   66718 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/old-k8s-version-471019/id_rsa Username:docker}
	I0403 19:25:46.226852   66718 ssh_runner.go:195] Run: systemctl --version
	I0403 19:25:46.232735   66718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0403 19:25:46.378960   66718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0403 19:25:46.385155   66718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0403 19:25:46.385220   66718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0403 19:25:46.401529   66718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0403 19:25:46.401552   66718 start.go:495] detecting cgroup driver to use...
	I0403 19:25:46.401616   66718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0403 19:25:46.419555   66718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0403 19:25:46.433005   66718 docker.go:217] disabling cri-docker service (if available) ...
	I0403 19:25:46.433065   66718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0403 19:25:46.446296   66718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0403 19:25:46.458468   66718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0403 19:25:46.569616   66718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0403 19:25:46.725316   66718 docker.go:233] disabling docker service ...
	I0403 19:25:46.725382   66718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0403 19:25:46.738400   66718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0403 19:25:46.750662   66718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0403 19:25:46.867372   66718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0403 19:25:46.992941   66718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0403 19:25:47.005994   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0403 19:25:47.023074   66718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0403 19:25:47.023132   66718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:25:47.032366   66718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0403 19:25:47.032449   66718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:25:47.042078   66718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:25:47.051531   66718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:25:47.061241   66718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
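The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod" beneath it. One way to confirm the drop-in ended up as intended (illustrative only):

  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
  # expected, roughly:
  #   pause_image = "registry.k8s.io/pause:3.2"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"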
	I0403 19:25:47.070741   66718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0403 19:25:47.079703   66718 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0403 19:25:47.079755   66718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0403 19:25:47.091448   66718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
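The sysctl failure at 19:25:47.079 only means br_netfilter was not loaded yet; the modprobe creates the /proc/sys/net/bridge/ keys, and the echo enables IPv4 forwarding. A quick hand check (illustrative) would be:

  lsmod | grep br_netfilter                    # module now loaded
  sysctl net.bridge.bridge-nf-call-iptables    # key exists once the module is in (typically 1 by default)
  cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above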
	I0403 19:25:47.099799   66718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:25:47.211022   66718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0403 19:25:47.301690   66718 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0403 19:25:47.301778   66718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0403 19:25:47.306697   66718 start.go:563] Will wait 60s for crictl version
	I0403 19:25:47.306758   66718 ssh_runner.go:195] Run: which crictl
	I0403 19:25:47.310214   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0403 19:25:47.347943   66718 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0403 19:25:47.348011   66718 ssh_runner.go:195] Run: crio --version
	I0403 19:25:47.376043   66718 ssh_runner.go:195] Run: crio --version
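The two 60s waits above gate on CRI-O actually answering after the restart: first the socket has to exist, then crictl has to get a version back (cri-o 1.29.1 here). Checked by hand, that is roughly (illustrative):

  test -S /var/run/crio/crio.sock && echo "CRI-O socket present"
  sudo /usr/bin/crictl version                 # succeeds only once the runtime answers on that socket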
	I0403 19:25:47.408714   66718 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0403 19:25:47.409932   66718 main.go:141] libmachine: (old-k8s-version-471019) Calling .GetIP
	I0403 19:25:47.412561   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:47.412945   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:96:04", ip: ""} in network mk-old-k8s-version-471019: {Iface:virbr3 ExpiryTime:2025-04-03 20:25:37 +0000 UTC Type:0 Mac:52:54:00:0f:96:04 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:old-k8s-version-471019 Clientid:01:52:54:00:0f:96:04}
	I0403 19:25:47.412995   66718 main.go:141] libmachine: (old-k8s-version-471019) DBG | domain old-k8s-version-471019 has defined IP address 192.168.61.209 and MAC address 52:54:00:0f:96:04 in network mk-old-k8s-version-471019
	I0403 19:25:47.413202   66718 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0403 19:25:47.417014   66718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 19:25:47.428732   66718 kubeadm.go:883] updating cluster {Name:old-k8s-version-471019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-
version-471019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0403 19:25:47.428857   66718 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0403 19:25:47.428912   66718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:25:47.475941   66718 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0403 19:25:47.476001   66718 ssh_runner.go:195] Run: which lz4
	I0403 19:25:47.479557   66718 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0403 19:25:47.483196   66718 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0403 19:25:47.483232   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0403 19:25:48.917076   66718 crio.go:462] duration metric: took 1.437574799s to copy over tarball
	I0403 19:25:48.917156   66718 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0403 19:25:51.774288   66718 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.857106629s)
	I0403 19:25:51.774312   66718 crio.go:469] duration metric: took 2.857209748s to extract the tarball
	I0403 19:25:51.774319   66718 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0403 19:25:51.817600   66718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:25:51.850943   66718 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0403 19:25:51.850969   66718 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0403 19:25:51.851035   66718 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:25:51.851048   66718 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:25:51.851092   66718 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0403 19:25:51.851125   66718 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0403 19:25:51.851134   66718 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:25:51.851274   66718 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0403 19:25:51.851057   66718 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:25:51.851823   66718 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:25:51.852960   66718 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:25:51.852984   66718 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0403 19:25:51.853062   66718 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:25:51.853099   66718 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:25:51.853233   66718 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:25:51.853246   66718 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0403 19:25:51.853768   66718 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0403 19:25:51.853955   66718 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:25:52.039430   66718 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:25:52.055081   66718 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:25:52.060357   66718 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:25:52.060357   66718 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0403 19:25:52.087697   66718 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0403 19:25:52.103384   66718 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0403 19:25:52.103427   66718 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:25:52.103471   66718 ssh_runner.go:195] Run: which crictl
	I0403 19:25:52.106055   66718 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0403 19:25:52.120700   66718 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:25:52.169362   66718 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0403 19:25:52.169419   66718 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:25:52.169468   66718 ssh_runner.go:195] Run: which crictl
	I0403 19:25:52.199173   66718 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0403 19:25:52.199216   66718 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0403 19:25:52.199249   66718 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0403 19:25:52.199263   66718 ssh_runner.go:195] Run: which crictl
	I0403 19:25:52.199287   66718 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:25:52.199320   66718 ssh_runner.go:195] Run: which crictl
	I0403 19:25:52.224445   66718 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0403 19:25:52.224481   66718 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0403 19:25:52.224515   66718 ssh_runner.go:195] Run: which crictl
	I0403 19:25:52.224519   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:25:52.224550   66718 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0403 19:25:52.224576   66718 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0403 19:25:52.224607   66718 ssh_runner.go:195] Run: which crictl
	I0403 19:25:52.228692   66718 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0403 19:25:52.228732   66718 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:25:52.228779   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:25:52.228780   66718 ssh_runner.go:195] Run: which crictl
	I0403 19:25:52.228825   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0403 19:25:52.228857   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:25:52.237835   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0403 19:25:52.237845   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0403 19:25:52.345828   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:25:52.352285   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:25:52.352381   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:25:52.352407   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0403 19:25:52.352447   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:25:52.363398   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0403 19:25:52.374142   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0403 19:25:52.503291   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0403 19:25:52.516660   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0403 19:25:52.516702   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0403 19:25:52.516702   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0403 19:25:52.516758   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:25:52.516788   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0403 19:25:52.516828   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0403 19:25:52.610797   66718 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0403 19:25:52.614541   66718 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0403 19:25:52.651303   66718 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0403 19:25:52.651350   66718 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0403 19:25:52.651378   66718 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0403 19:25:52.660843   66718 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0403 19:25:52.660854   66718 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0403 19:25:52.695607   66718 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0403 19:25:53.090221   66718 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:25:53.234026   66718 cache_images.go:92] duration metric: took 1.383041776s to LoadCachedImages
	W0403 19:25:53.234127   66718 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
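The warning is non-fatal: the cache directory under .minikube/cache/images simply has no blobs for these tags, so minikube carries on and the images get pulled from the registry when the control-plane pods are created. What the loader was looking for on the Jenkins host (illustrative):

  ls /home/jenkins/minikube-integration/20591-14371/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null   # empty or missing here, hence the warning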
	I0403 19:25:53.234146   66718 kubeadm.go:934] updating node { 192.168.61.209 8443 v1.20.0 crio true true} ...
	I0403 19:25:53.234249   66718 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-471019 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-471019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0403 19:25:53.234326   66718 ssh_runner.go:195] Run: crio config
	I0403 19:25:53.281989   66718 cni.go:84] Creating CNI manager for ""
	I0403 19:25:53.282013   66718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 19:25:53.282023   66718 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0403 19:25:53.282038   66718 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.209 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-471019 NodeName:old-k8s-version-471019 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0403 19:25:53.282173   66718 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-471019"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
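The rendered config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) built from the kubeadm options a few lines earlier; it is written to /var/tmp/minikube/kubeadm.yaml.new just below (2123 bytes). A trivial sanity check against that file (illustrative):

  grep -c '^kind:' /var/tmp/minikube/kubeadm.yaml.new   # 4, one per document listed above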
	
	I0403 19:25:53.282234   66718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0403 19:25:53.293237   66718 binaries.go:44] Found k8s binaries, skipping transfer
	I0403 19:25:53.293297   66718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0403 19:25:53.303393   66718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0403 19:25:53.318693   66718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0403 19:25:53.333735   66718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0403 19:25:53.349551   66718 ssh_runner.go:195] Run: grep 192.168.61.209	control-plane.minikube.internal$ /etc/hosts
	I0403 19:25:53.353089   66718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 19:25:53.364451   66718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:25:53.502474   66718 ssh_runner.go:195] Run: sudo systemctl start kubelet
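At this point the kubelet unit (352 bytes) and its 10-kubeadm.conf drop-in (430 bytes) written above are in effect, carrying the ExecStart flags from the [Service] fragment at 19:25:53.234. Verifying what systemd actually loaded (illustrative; not run by the test):

  systemctl cat kubelet          # unit file plus the 10-kubeadm.conf drop-in
  systemctl is-active kubelet    # "active" once the start above succeeded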
	I0403 19:25:53.519140   66718 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019 for IP: 192.168.61.209
	I0403 19:25:53.519177   66718 certs.go:194] generating shared ca certs ...
	I0403 19:25:53.519197   66718 certs.go:226] acquiring lock for ca certs: {Name:mk34f09724355b828d31eb1ee97771fd3b4645c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:25:53.519400   66718 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key
	I0403 19:25:53.519462   66718 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key
	I0403 19:25:53.519476   66718 certs.go:256] generating profile certs ...
	I0403 19:25:53.519605   66718 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/client.key
	I0403 19:25:53.519675   66718 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.key.6f94e3bf
	I0403 19:25:53.519727   66718 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.key
	I0403 19:25:53.519869   66718 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem (1338 bytes)
	W0403 19:25:53.519911   66718 certs.go:480] ignoring /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552_empty.pem, impossibly tiny 0 bytes
	I0403 19:25:53.519924   66718 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem (1675 bytes)
	I0403 19:25:53.519960   66718 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem (1082 bytes)
	I0403 19:25:53.520000   66718 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem (1123 bytes)
	I0403 19:25:53.520031   66718 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem (1675 bytes)
	I0403 19:25:53.520089   66718 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:25:53.520744   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0403 19:25:53.561155   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0403 19:25:53.598898   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0403 19:25:53.630366   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0403 19:25:53.665389   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0403 19:25:53.702123   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0403 19:25:53.729150   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0403 19:25:53.766721   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/old-k8s-version-471019/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0403 19:25:53.793226   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem --> /usr/share/ca-certificates/21552.pem (1338 bytes)
	I0403 19:25:53.816589   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /usr/share/ca-certificates/215522.pem (1708 bytes)
	I0403 19:25:53.839231   66718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0403 19:25:53.861854   66718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0403 19:25:53.880757   66718 ssh_runner.go:195] Run: openssl version
	I0403 19:25:53.886677   66718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215522.pem && ln -fs /usr/share/ca-certificates/215522.pem /etc/ssl/certs/215522.pem"
	I0403 19:25:53.897366   66718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215522.pem
	I0403 19:25:53.901736   66718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  3 18:20 /usr/share/ca-certificates/215522.pem
	I0403 19:25:53.901800   66718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215522.pem
	I0403 19:25:53.907390   66718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215522.pem /etc/ssl/certs/3ec20f2e.0"
	I0403 19:25:53.917917   66718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0403 19:25:53.930926   66718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:25:53.936391   66718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  3 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:25:53.936445   66718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:25:53.943534   66718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0403 19:25:53.955471   66718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21552.pem && ln -fs /usr/share/ca-certificates/21552.pem /etc/ssl/certs/21552.pem"
	I0403 19:25:53.965887   66718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21552.pem
	I0403 19:25:53.969955   66718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  3 18:20 /usr/share/ca-certificates/21552.pem
	I0403 19:25:53.970008   66718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21552.pem
	I0403 19:25:53.975542   66718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21552.pem /etc/ssl/certs/51391683.0"
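The openssl x509 -hash calls explain the odd symlink names: the subject-name hash of each CA (3ec20f2e, b5213941, 51391683 here) becomes the <hash>.0 link in /etc/ssl/certs that OpenSSL uses for lookups. For example (illustrative):

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0                                           # the symlink created above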
	I0403 19:25:53.986250   66718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0403 19:25:53.990619   66718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0403 19:25:53.996147   66718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0403 19:25:54.001552   66718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0403 19:25:54.007046   66718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0403 19:25:54.012501   66718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0403 19:25:54.017842   66718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
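Each -checkend 86400 probe exits 0 only if the certificate is still valid 24 hours from now, so all of these passing means no control-plane cert is about to expire. By hand that looks like (illustrative):

  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
    && echo "valid for at least 24h" || echo "expires within 24h"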
	I0403 19:25:54.023632   66718 kubeadm.go:392] StartCluster: {Name:old-k8s-version-471019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-ver
sion-471019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:25:54.023741   66718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0403 19:25:54.023805   66718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 19:25:54.062731   66718 cri.go:89] found id: ""
	I0403 19:25:54.062795   66718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0403 19:25:54.072484   66718 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0403 19:25:54.072507   66718 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0403 19:25:54.072556   66718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0403 19:25:54.081611   66718 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0403 19:25:54.082485   66718 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-471019" does not appear in /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:25:54.083037   66718 kubeconfig.go:62] /home/jenkins/minikube-integration/20591-14371/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-471019" cluster setting kubeconfig missing "old-k8s-version-471019" context setting]
	I0403 19:25:54.083755   66718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:25:54.180233   66718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0403 19:25:54.192290   66718 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.209
	I0403 19:25:54.192320   66718 kubeadm.go:1160] stopping kube-system containers ...
	I0403 19:25:54.192331   66718 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0403 19:25:54.192373   66718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 19:25:54.227305   66718 cri.go:89] found id: ""
	I0403 19:25:54.227397   66718 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0403 19:25:54.244102   66718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:25:54.253806   66718 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:25:54.253828   66718 kubeadm.go:157] found existing configuration files:
	
	I0403 19:25:54.253880   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:25:54.263362   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:25:54.263426   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:25:54.272791   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:25:54.281603   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:25:54.281685   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:25:54.291004   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:25:54.299246   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:25:54.299311   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:25:54.308180   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:25:54.317430   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:25:54.317488   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:25:54.326560   66718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0403 19:25:54.335801   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:25:54.566375   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:25:54.988454   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:25:55.217363   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:25:55.310015   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0403 19:25:55.411683   66718 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:25:55.411752   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:25:55.912249   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:25:56.412101   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:25:56.911986   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:25:57.412755   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:25:57.912552   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:25:58.411915   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:25:58.912838   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:25:59.412773   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:25:59.912500   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:00.412262   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:00.912169   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:01.412322   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:01.911885   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:02.412808   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:02.912041   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:03.411983   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:03.911829   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:04.411864   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:04.912635   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:05.412794   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:05.912005   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:06.412706   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:06.912009   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:07.412087   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:07.911880   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:08.412049   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:08.911882   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:09.411834   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:09.912296   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:10.411834   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:10.911856   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:11.412549   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:11.912896   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:12.412545   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:12.911949   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:13.411907   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:13.911851   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:14.412661   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:14.912726   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:15.412054   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:15.912041   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:16.412429   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:16.912608   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:17.412523   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:17.912218   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:18.412105   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:18.912118   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:19.412598   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:19.912008   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:20.411917   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:20.912047   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:21.412856   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:21.912394   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:22.412641   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:22.912101   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:23.411824   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:23.912050   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:24.412538   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:24.912865   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:25.411946   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:25.911967   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:26.412406   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:26.912614   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:27.411909   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:27.912265   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:28.412037   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:28.911940   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:29.411893   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:29.911888   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:30.411822   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:30.912033   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:31.412018   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:31.912041   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:32.411937   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:32.912127   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:33.412120   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:33.912544   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:34.412058   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:34.912285   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:35.412816   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:35.911862   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:36.412295   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:36.912011   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:37.412333   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:37.911988   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:38.412278   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:38.912069   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:39.412081   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:39.912611   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:40.412581   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:40.912766   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:41.412890   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:41.912836   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:42.411984   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:42.912255   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:43.412235   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:43.912344   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:44.412436   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:44.911985   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:45.412100   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:45.912186   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:46.412007   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:46.912022   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:47.412079   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:47.912019   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:48.412801   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:48.912785   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:49.412661   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:49.912662   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:50.412676   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:50.912010   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:51.411876   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:51.912290   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:52.412091   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:52.911958   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:53.411946   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:53.912871   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:54.412303   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:54.912435   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
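Editor's note: the run of entries above shows minikube polling roughly every 500 ms for a running kube-apiserver process before it falls back to inspecting the container runtime directly. A minimal sketch of the same check, run by hand against the node, is below; the profile name and the 60-attempt retry budget are placeholders and assumptions, not values taken from this log.

    # Hedged sketch only: poll for the kube-apiserver process the way the log does.
    # <profile> and the 60-attempt budget are assumptions, not from this report.
    for attempt in $(seq 1 60); do
      if minikube -p <profile> ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'" >/dev/null 2>&1; then
        echo "kube-apiserver process found"
        break
      fi
      sleep 0.5
    done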
	I0403 19:26:55.412835   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:26:55.412905   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:26:55.451857   66718 cri.go:89] found id: ""
	I0403 19:26:55.451894   66718 logs.go:282] 0 containers: []
	W0403 19:26:55.451905   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:26:55.451912   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:26:55.451993   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:26:55.488599   66718 cri.go:89] found id: ""
	I0403 19:26:55.488627   66718 logs.go:282] 0 containers: []
	W0403 19:26:55.488649   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:26:55.488655   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:26:55.488710   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:26:55.523390   66718 cri.go:89] found id: ""
	I0403 19:26:55.523419   66718 logs.go:282] 0 containers: []
	W0403 19:26:55.523429   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:26:55.523436   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:26:55.523495   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:26:55.559802   66718 cri.go:89] found id: ""
	I0403 19:26:55.559825   66718 logs.go:282] 0 containers: []
	W0403 19:26:55.559833   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:26:55.559840   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:26:55.559901   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:26:55.590056   66718 cri.go:89] found id: ""
	I0403 19:26:55.590082   66718 logs.go:282] 0 containers: []
	W0403 19:26:55.590090   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:26:55.590095   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:26:55.590149   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:26:55.621274   66718 cri.go:89] found id: ""
	I0403 19:26:55.621298   66718 logs.go:282] 0 containers: []
	W0403 19:26:55.621308   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:26:55.621315   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:26:55.621373   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:26:55.653115   66718 cri.go:89] found id: ""
	I0403 19:26:55.653142   66718 logs.go:282] 0 containers: []
	W0403 19:26:55.653152   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:26:55.653158   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:26:55.653239   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:26:55.686186   66718 cri.go:89] found id: ""
	I0403 19:26:55.686213   66718 logs.go:282] 0 containers: []
	W0403 19:26:55.686224   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
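Editor's note: with no apiserver process found, the tool enumerates every expected control-plane container through the CRI with "crictl ps -a --quiet --name=<component>"; each query above returns an empty ID list. The same enumeration can be reproduced by hand roughly as follows (run inside the node, e.g. via "minikube ssh"; the component list simply mirrors the names queried above).

    # Illustrative sketch: list control-plane containers by name filter, as in the log.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none found>}"
    done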
	I0403 19:26:55.686235   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:26:55.686248   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:26:55.738594   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:26:55.738632   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:26:55.753333   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:26:55.753363   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:26:55.870721   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:26:55.870745   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:26:55.870760   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:26:55.947483   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:26:55.947517   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
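Editor's note: the recurring "describe nodes" failure above ("The connection to the server localhost:8443 was refused") indicates that nothing is serving the apiserver endpoint, which is consistent with the empty crictl results. A hedged way to confirm that directly is sketched below; port 8443 is taken from the error message, while <profile> is a placeholder and the use of ss is an assumption about tooling available in the node image.

    # Sketch only: check whether anything is listening on the apiserver port.
    minikube -p <profile> ssh "sudo ss -tlnp | grep 8443 || echo 'nothing listening on 8443'"
    minikube -p <profile> ssh "sudo crictl ps -a --name=kube-apiserver"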
	I0403 19:26:58.487101   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:26:58.499429   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:26:58.499505   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:26:58.538029   66718 cri.go:89] found id: ""
	I0403 19:26:58.538057   66718 logs.go:282] 0 containers: []
	W0403 19:26:58.538065   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:26:58.538071   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:26:58.538125   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:26:58.570291   66718 cri.go:89] found id: ""
	I0403 19:26:58.570318   66718 logs.go:282] 0 containers: []
	W0403 19:26:58.570325   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:26:58.570330   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:26:58.570372   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:26:58.602456   66718 cri.go:89] found id: ""
	I0403 19:26:58.602484   66718 logs.go:282] 0 containers: []
	W0403 19:26:58.602510   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:26:58.602517   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:26:58.602588   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:26:58.634711   66718 cri.go:89] found id: ""
	I0403 19:26:58.634739   66718 logs.go:282] 0 containers: []
	W0403 19:26:58.634749   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:26:58.634757   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:26:58.634814   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:26:58.666069   66718 cri.go:89] found id: ""
	I0403 19:26:58.666094   66718 logs.go:282] 0 containers: []
	W0403 19:26:58.666101   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:26:58.666105   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:26:58.666160   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:26:58.701317   66718 cri.go:89] found id: ""
	I0403 19:26:58.701341   66718 logs.go:282] 0 containers: []
	W0403 19:26:58.701351   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:26:58.701358   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:26:58.701416   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:26:58.752214   66718 cri.go:89] found id: ""
	I0403 19:26:58.752235   66718 logs.go:282] 0 containers: []
	W0403 19:26:58.752244   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:26:58.752250   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:26:58.752304   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:26:58.795554   66718 cri.go:89] found id: ""
	I0403 19:26:58.795576   66718 logs.go:282] 0 containers: []
	W0403 19:26:58.795582   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:26:58.795591   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:26:58.795604   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:26:58.847545   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:26:58.847596   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:26:58.862213   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:26:58.862235   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:26:58.929813   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:26:58.929834   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:26:58.929850   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:26:59.003534   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:26:59.003566   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:01.541581   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:01.554477   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:01.554546   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:01.596560   66718 cri.go:89] found id: ""
	I0403 19:27:01.596587   66718 logs.go:282] 0 containers: []
	W0403 19:27:01.596598   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:01.596604   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:01.596666   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:01.627533   66718 cri.go:89] found id: ""
	I0403 19:27:01.627555   66718 logs.go:282] 0 containers: []
	W0403 19:27:01.627564   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:01.627571   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:01.627629   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:01.659752   66718 cri.go:89] found id: ""
	I0403 19:27:01.659776   66718 logs.go:282] 0 containers: []
	W0403 19:27:01.659808   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:01.659819   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:01.659870   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:01.691045   66718 cri.go:89] found id: ""
	I0403 19:27:01.691071   66718 logs.go:282] 0 containers: []
	W0403 19:27:01.691081   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:01.691087   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:01.691148   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:01.729043   66718 cri.go:89] found id: ""
	I0403 19:27:01.729068   66718 logs.go:282] 0 containers: []
	W0403 19:27:01.729079   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:01.729100   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:01.729162   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:01.767530   66718 cri.go:89] found id: ""
	I0403 19:27:01.767552   66718 logs.go:282] 0 containers: []
	W0403 19:27:01.767560   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:01.767564   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:01.767608   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:01.803459   66718 cri.go:89] found id: ""
	I0403 19:27:01.803483   66718 logs.go:282] 0 containers: []
	W0403 19:27:01.803491   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:01.803497   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:01.803554   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:01.835100   66718 cri.go:89] found id: ""
	I0403 19:27:01.835124   66718 logs.go:282] 0 containers: []
	W0403 19:27:01.835132   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:01.835142   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:01.835155   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:01.912129   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:01.912152   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:01.912168   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:01.992678   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:01.992711   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:02.029266   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:02.029297   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:02.078557   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:02.078593   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
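Editor's note: each diagnostic pass above gathers the same material: kubelet and CRI-O unit logs via journalctl, a filtered dmesg, and a container listing with a docker fallback. The commands below reproduce that gathering by hand; they are copied from the log itself and assume the same systemd unit names inside the node.

    # Same gathering as the log, run manually inside the node (e.g. via "minikube ssh").
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a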
	I0403 19:27:04.591717   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:04.605733   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:04.605792   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:04.642035   66718 cri.go:89] found id: ""
	I0403 19:27:04.642066   66718 logs.go:282] 0 containers: []
	W0403 19:27:04.642076   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:04.642084   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:04.642143   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:04.684338   66718 cri.go:89] found id: ""
	I0403 19:27:04.684363   66718 logs.go:282] 0 containers: []
	W0403 19:27:04.684372   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:04.684377   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:04.684438   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:04.720573   66718 cri.go:89] found id: ""
	I0403 19:27:04.720611   66718 logs.go:282] 0 containers: []
	W0403 19:27:04.720622   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:04.720629   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:04.720689   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:04.753252   66718 cri.go:89] found id: ""
	I0403 19:27:04.753280   66718 logs.go:282] 0 containers: []
	W0403 19:27:04.753288   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:04.753292   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:04.753340   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:04.787599   66718 cri.go:89] found id: ""
	I0403 19:27:04.787623   66718 logs.go:282] 0 containers: []
	W0403 19:27:04.787629   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:04.787635   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:04.787686   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:04.820164   66718 cri.go:89] found id: ""
	I0403 19:27:04.820197   66718 logs.go:282] 0 containers: []
	W0403 19:27:04.820218   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:04.820225   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:04.820299   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:04.853444   66718 cri.go:89] found id: ""
	I0403 19:27:04.853477   66718 logs.go:282] 0 containers: []
	W0403 19:27:04.853487   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:04.853494   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:04.853552   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:04.887670   66718 cri.go:89] found id: ""
	I0403 19:27:04.887699   66718 logs.go:282] 0 containers: []
	W0403 19:27:04.887710   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:04.887719   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:04.887731   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:04.925150   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:04.925182   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:04.978873   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:04.978907   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:04.992471   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:04.992495   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:05.062249   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:05.062273   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:05.062287   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:07.640872   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:07.653675   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:07.653748   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:07.686754   66718 cri.go:89] found id: ""
	I0403 19:27:07.686782   66718 logs.go:282] 0 containers: []
	W0403 19:27:07.686791   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:07.686799   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:07.686871   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:07.719506   66718 cri.go:89] found id: ""
	I0403 19:27:07.719534   66718 logs.go:282] 0 containers: []
	W0403 19:27:07.719543   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:07.719551   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:07.719609   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:07.757932   66718 cri.go:89] found id: ""
	I0403 19:27:07.757953   66718 logs.go:282] 0 containers: []
	W0403 19:27:07.757961   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:07.757966   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:07.758008   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:07.795074   66718 cri.go:89] found id: ""
	I0403 19:27:07.795094   66718 logs.go:282] 0 containers: []
	W0403 19:27:07.795100   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:07.795104   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:07.795156   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:07.831471   66718 cri.go:89] found id: ""
	I0403 19:27:07.831495   66718 logs.go:282] 0 containers: []
	W0403 19:27:07.831502   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:07.831507   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:07.831560   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:07.863995   66718 cri.go:89] found id: ""
	I0403 19:27:07.864021   66718 logs.go:282] 0 containers: []
	W0403 19:27:07.864028   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:07.864034   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:07.864087   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:07.900688   66718 cri.go:89] found id: ""
	I0403 19:27:07.900718   66718 logs.go:282] 0 containers: []
	W0403 19:27:07.900726   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:07.900749   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:07.900799   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:07.932571   66718 cri.go:89] found id: ""
	I0403 19:27:07.932601   66718 logs.go:282] 0 containers: []
	W0403 19:27:07.932610   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:07.932618   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:07.932629   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:08.001233   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:08.001260   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:08.001273   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:08.074762   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:08.074796   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:08.114673   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:08.114700   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:08.163940   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:08.163976   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:10.678951   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:10.692004   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:10.692064   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:10.726155   66718 cri.go:89] found id: ""
	I0403 19:27:10.726185   66718 logs.go:282] 0 containers: []
	W0403 19:27:10.726197   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:10.726205   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:10.726264   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:10.759168   66718 cri.go:89] found id: ""
	I0403 19:27:10.759205   66718 logs.go:282] 0 containers: []
	W0403 19:27:10.759212   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:10.759218   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:10.759265   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:10.789274   66718 cri.go:89] found id: ""
	I0403 19:27:10.789300   66718 logs.go:282] 0 containers: []
	W0403 19:27:10.789310   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:10.789317   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:10.789378   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:10.829204   66718 cri.go:89] found id: ""
	I0403 19:27:10.829236   66718 logs.go:282] 0 containers: []
	W0403 19:27:10.829243   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:10.829249   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:10.829299   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:10.877507   66718 cri.go:89] found id: ""
	I0403 19:27:10.877533   66718 logs.go:282] 0 containers: []
	W0403 19:27:10.877543   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:10.877549   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:10.877612   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:10.922667   66718 cri.go:89] found id: ""
	I0403 19:27:10.922694   66718 logs.go:282] 0 containers: []
	W0403 19:27:10.922703   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:10.922711   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:10.922774   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:10.978380   66718 cri.go:89] found id: ""
	I0403 19:27:10.978407   66718 logs.go:282] 0 containers: []
	W0403 19:27:10.978416   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:10.978422   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:10.978471   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:11.014222   66718 cri.go:89] found id: ""
	I0403 19:27:11.014245   66718 logs.go:282] 0 containers: []
	W0403 19:27:11.014252   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:11.014260   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:11.014270   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:11.064654   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:11.064689   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:11.077315   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:11.077341   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:11.152120   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:11.152153   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:11.152167   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:11.236672   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:11.236714   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:13.774949   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:13.789312   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:13.789383   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:13.822639   66718 cri.go:89] found id: ""
	I0403 19:27:13.822672   66718 logs.go:282] 0 containers: []
	W0403 19:27:13.822681   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:13.822687   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:13.822748   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:13.855214   66718 cri.go:89] found id: ""
	I0403 19:27:13.855247   66718 logs.go:282] 0 containers: []
	W0403 19:27:13.855259   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:13.855268   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:13.855327   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:13.889930   66718 cri.go:89] found id: ""
	I0403 19:27:13.889959   66718 logs.go:282] 0 containers: []
	W0403 19:27:13.889972   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:13.889981   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:13.890029   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:13.928278   66718 cri.go:89] found id: ""
	I0403 19:27:13.928301   66718 logs.go:282] 0 containers: []
	W0403 19:27:13.928308   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:13.928313   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:13.928357   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:13.961717   66718 cri.go:89] found id: ""
	I0403 19:27:13.961740   66718 logs.go:282] 0 containers: []
	W0403 19:27:13.961747   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:13.961753   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:13.961807   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:13.995051   66718 cri.go:89] found id: ""
	I0403 19:27:13.995082   66718 logs.go:282] 0 containers: []
	W0403 19:27:13.995092   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:13.995100   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:13.995161   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:14.029953   66718 cri.go:89] found id: ""
	I0403 19:27:14.029985   66718 logs.go:282] 0 containers: []
	W0403 19:27:14.029997   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:14.030005   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:14.030069   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:14.068333   66718 cri.go:89] found id: ""
	I0403 19:27:14.068361   66718 logs.go:282] 0 containers: []
	W0403 19:27:14.068369   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:14.068377   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:14.068389   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:14.120435   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:14.120469   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:14.133314   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:14.133340   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:14.199000   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:14.199026   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:14.199042   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:14.282539   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:14.282576   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:16.820158   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:16.833205   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:16.833267   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:16.866908   66718 cri.go:89] found id: ""
	I0403 19:27:16.866941   66718 logs.go:282] 0 containers: []
	W0403 19:27:16.866951   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:16.866957   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:16.867017   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:16.904848   66718 cri.go:89] found id: ""
	I0403 19:27:16.904889   66718 logs.go:282] 0 containers: []
	W0403 19:27:16.904901   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:16.904909   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:16.904966   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:16.941052   66718 cri.go:89] found id: ""
	I0403 19:27:16.941078   66718 logs.go:282] 0 containers: []
	W0403 19:27:16.941101   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:16.941117   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:16.941177   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:16.977616   66718 cri.go:89] found id: ""
	I0403 19:27:16.977641   66718 logs.go:282] 0 containers: []
	W0403 19:27:16.977652   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:16.977660   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:16.977720   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:17.013382   66718 cri.go:89] found id: ""
	I0403 19:27:17.013405   66718 logs.go:282] 0 containers: []
	W0403 19:27:17.013412   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:17.013417   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:17.013474   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:17.049380   66718 cri.go:89] found id: ""
	I0403 19:27:17.049402   66718 logs.go:282] 0 containers: []
	W0403 19:27:17.049409   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:17.049415   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:17.049467   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:17.081059   66718 cri.go:89] found id: ""
	I0403 19:27:17.081088   66718 logs.go:282] 0 containers: []
	W0403 19:27:17.081099   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:17.081105   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:17.081163   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:17.111973   66718 cri.go:89] found id: ""
	I0403 19:27:17.112004   66718 logs.go:282] 0 containers: []
	W0403 19:27:17.112016   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:17.112026   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:17.112040   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:17.181515   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:17.181542   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:17.181553   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:17.260793   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:17.260831   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:17.300979   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:17.301010   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:17.348944   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:17.348977   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:19.863419   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:19.876728   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:19.876808   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:19.911302   66718 cri.go:89] found id: ""
	I0403 19:27:19.911333   66718 logs.go:282] 0 containers: []
	W0403 19:27:19.911344   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:19.911351   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:19.911408   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:19.942691   66718 cri.go:89] found id: ""
	I0403 19:27:19.942718   66718 logs.go:282] 0 containers: []
	W0403 19:27:19.942725   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:19.942730   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:19.942771   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:19.972353   66718 cri.go:89] found id: ""
	I0403 19:27:19.972379   66718 logs.go:282] 0 containers: []
	W0403 19:27:19.972387   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:19.972392   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:19.972450   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:20.005931   66718 cri.go:89] found id: ""
	I0403 19:27:20.005955   66718 logs.go:282] 0 containers: []
	W0403 19:27:20.005980   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:20.005988   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:20.006050   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:20.038640   66718 cri.go:89] found id: ""
	I0403 19:27:20.038668   66718 logs.go:282] 0 containers: []
	W0403 19:27:20.038678   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:20.038685   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:20.038742   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:20.069547   66718 cri.go:89] found id: ""
	I0403 19:27:20.069570   66718 logs.go:282] 0 containers: []
	W0403 19:27:20.069577   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:20.069583   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:20.069628   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:20.100911   66718 cri.go:89] found id: ""
	I0403 19:27:20.100952   66718 logs.go:282] 0 containers: []
	W0403 19:27:20.100960   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:20.100973   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:20.101031   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:20.132392   66718 cri.go:89] found id: ""
	I0403 19:27:20.132418   66718 logs.go:282] 0 containers: []
	W0403 19:27:20.132425   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:20.132434   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:20.132443   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:20.188392   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:20.188424   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:20.202119   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:20.202151   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:20.264471   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:20.264494   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:20.264508   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:20.343293   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:20.343329   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:22.881529   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:22.894692   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:22.894770   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:22.927212   66718 cri.go:89] found id: ""
	I0403 19:27:22.927244   66718 logs.go:282] 0 containers: []
	W0403 19:27:22.927255   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:22.927263   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:22.927328   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:22.958861   66718 cri.go:89] found id: ""
	I0403 19:27:22.958921   66718 logs.go:282] 0 containers: []
	W0403 19:27:22.958930   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:22.958936   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:22.958990   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:22.993506   66718 cri.go:89] found id: ""
	I0403 19:27:22.993537   66718 logs.go:282] 0 containers: []
	W0403 19:27:22.993548   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:22.993555   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:22.993617   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:23.024611   66718 cri.go:89] found id: ""
	I0403 19:27:23.024641   66718 logs.go:282] 0 containers: []
	W0403 19:27:23.024652   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:23.024659   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:23.024717   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:23.059033   66718 cri.go:89] found id: ""
	I0403 19:27:23.059072   66718 logs.go:282] 0 containers: []
	W0403 19:27:23.059080   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:23.059085   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:23.059146   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:23.092303   66718 cri.go:89] found id: ""
	I0403 19:27:23.092332   66718 logs.go:282] 0 containers: []
	W0403 19:27:23.092342   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:23.092349   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:23.092405   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:23.123968   66718 cri.go:89] found id: ""
	I0403 19:27:23.123991   66718 logs.go:282] 0 containers: []
	W0403 19:27:23.124005   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:23.124010   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:23.124060   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:23.155466   66718 cri.go:89] found id: ""
	I0403 19:27:23.155493   66718 logs.go:282] 0 containers: []
	W0403 19:27:23.155505   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:23.155516   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:23.155528   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:23.204260   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:23.204294   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:23.216495   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:23.216522   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:23.279665   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:23.279695   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:23.279710   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:23.358337   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:23.358369   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:25.894953   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:25.909351   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:25.909423   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:25.947950   66718 cri.go:89] found id: ""
	I0403 19:27:25.947977   66718 logs.go:282] 0 containers: []
	W0403 19:27:25.947986   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:25.947992   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:25.948055   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:25.984284   66718 cri.go:89] found id: ""
	I0403 19:27:25.984311   66718 logs.go:282] 0 containers: []
	W0403 19:27:25.984321   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:25.984329   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:25.984382   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:26.020704   66718 cri.go:89] found id: ""
	I0403 19:27:26.020732   66718 logs.go:282] 0 containers: []
	W0403 19:27:26.020743   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:26.020750   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:26.020804   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:26.056632   66718 cri.go:89] found id: ""
	I0403 19:27:26.056657   66718 logs.go:282] 0 containers: []
	W0403 19:27:26.056665   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:26.056670   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:26.056724   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:26.087961   66718 cri.go:89] found id: ""
	I0403 19:27:26.087991   66718 logs.go:282] 0 containers: []
	W0403 19:27:26.087999   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:26.088004   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:26.088063   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:26.125496   66718 cri.go:89] found id: ""
	I0403 19:27:26.125540   66718 logs.go:282] 0 containers: []
	W0403 19:27:26.125550   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:26.125558   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:26.125612   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:26.163672   66718 cri.go:89] found id: ""
	I0403 19:27:26.163699   66718 logs.go:282] 0 containers: []
	W0403 19:27:26.163710   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:26.163717   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:26.163774   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:26.197328   66718 cri.go:89] found id: ""
	I0403 19:27:26.197354   66718 logs.go:282] 0 containers: []
	W0403 19:27:26.197361   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:26.197370   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:26.197391   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:26.280006   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:26.280043   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:26.318599   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:26.318627   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:26.372291   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:26.372333   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:26.385450   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:26.385482   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:26.453279   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:28.954961   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:28.967416   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:28.967484   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:28.999215   66718 cri.go:89] found id: ""
	I0403 19:27:28.999245   66718 logs.go:282] 0 containers: []
	W0403 19:27:28.999255   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:28.999262   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:28.999318   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:29.029676   66718 cri.go:89] found id: ""
	I0403 19:27:29.029706   66718 logs.go:282] 0 containers: []
	W0403 19:27:29.029717   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:29.029724   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:29.029779   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:29.059641   66718 cri.go:89] found id: ""
	I0403 19:27:29.059666   66718 logs.go:282] 0 containers: []
	W0403 19:27:29.059676   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:29.059683   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:29.059745   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:29.096798   66718 cri.go:89] found id: ""
	I0403 19:27:29.096827   66718 logs.go:282] 0 containers: []
	W0403 19:27:29.096837   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:29.096844   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:29.096901   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:29.128503   66718 cri.go:89] found id: ""
	I0403 19:27:29.128532   66718 logs.go:282] 0 containers: []
	W0403 19:27:29.128542   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:29.128549   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:29.128608   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:29.172253   66718 cri.go:89] found id: ""
	I0403 19:27:29.172280   66718 logs.go:282] 0 containers: []
	W0403 19:27:29.172287   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:29.172293   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:29.172350   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:29.204168   66718 cri.go:89] found id: ""
	I0403 19:27:29.204200   66718 logs.go:282] 0 containers: []
	W0403 19:27:29.204210   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:29.204217   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:29.204278   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:29.236620   66718 cri.go:89] found id: ""
	I0403 19:27:29.236649   66718 logs.go:282] 0 containers: []
	W0403 19:27:29.236657   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:29.236665   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:29.236675   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:29.286000   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:29.286031   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:29.299774   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:29.299802   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:29.374591   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:29.374617   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:29.374633   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:29.450363   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:29.450400   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:31.990006   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:32.004041   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:32.004113   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:32.041043   66718 cri.go:89] found id: ""
	I0403 19:27:32.041071   66718 logs.go:282] 0 containers: []
	W0403 19:27:32.041080   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:32.041086   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:32.041130   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:32.076047   66718 cri.go:89] found id: ""
	I0403 19:27:32.076079   66718 logs.go:282] 0 containers: []
	W0403 19:27:32.076088   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:32.076096   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:32.076151   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:32.110023   66718 cri.go:89] found id: ""
	I0403 19:27:32.110050   66718 logs.go:282] 0 containers: []
	W0403 19:27:32.110061   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:32.110068   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:32.110126   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:32.142781   66718 cri.go:89] found id: ""
	I0403 19:27:32.142811   66718 logs.go:282] 0 containers: []
	W0403 19:27:32.142838   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:32.142847   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:32.142904   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:32.174405   66718 cri.go:89] found id: ""
	I0403 19:27:32.174437   66718 logs.go:282] 0 containers: []
	W0403 19:27:32.174448   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:32.174455   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:32.174507   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:32.206691   66718 cri.go:89] found id: ""
	I0403 19:27:32.206721   66718 logs.go:282] 0 containers: []
	W0403 19:27:32.206733   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:32.206742   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:32.206809   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:32.238459   66718 cri.go:89] found id: ""
	I0403 19:27:32.238491   66718 logs.go:282] 0 containers: []
	W0403 19:27:32.238499   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:32.238504   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:32.238550   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:32.270120   66718 cri.go:89] found id: ""
	I0403 19:27:32.270151   66718 logs.go:282] 0 containers: []
	W0403 19:27:32.270167   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:32.270179   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:32.270191   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:32.353645   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:32.353679   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:32.390706   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:32.390736   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:32.441503   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:32.441534   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:32.454347   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:32.454376   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:32.517653   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:35.018317   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:35.030880   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:35.030944   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:35.061462   66718 cri.go:89] found id: ""
	I0403 19:27:35.061484   66718 logs.go:282] 0 containers: []
	W0403 19:27:35.061494   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:35.061502   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:35.061559   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:35.091996   66718 cri.go:89] found id: ""
	I0403 19:27:35.092026   66718 logs.go:282] 0 containers: []
	W0403 19:27:35.092037   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:35.092044   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:35.092102   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:35.122867   66718 cri.go:89] found id: ""
	I0403 19:27:35.122893   66718 logs.go:282] 0 containers: []
	W0403 19:27:35.122903   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:35.122910   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:35.122972   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:35.153616   66718 cri.go:89] found id: ""
	I0403 19:27:35.153640   66718 logs.go:282] 0 containers: []
	W0403 19:27:35.153647   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:35.153653   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:35.153705   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:35.191922   66718 cri.go:89] found id: ""
	I0403 19:27:35.191950   66718 logs.go:282] 0 containers: []
	W0403 19:27:35.191961   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:35.191969   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:35.192030   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:35.223389   66718 cri.go:89] found id: ""
	I0403 19:27:35.223415   66718 logs.go:282] 0 containers: []
	W0403 19:27:35.223423   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:35.223430   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:35.223484   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:35.254033   66718 cri.go:89] found id: ""
	I0403 19:27:35.254057   66718 logs.go:282] 0 containers: []
	W0403 19:27:35.254065   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:35.254070   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:35.254130   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:35.287182   66718 cri.go:89] found id: ""
	I0403 19:27:35.287210   66718 logs.go:282] 0 containers: []
	W0403 19:27:35.287220   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:35.287230   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:35.287243   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:35.325091   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:35.325116   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:35.378286   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:35.378321   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:35.391322   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:35.391349   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:35.465736   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:35.465764   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:35.465778   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:38.042202   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:38.056717   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:38.056776   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:38.092979   66718 cri.go:89] found id: ""
	I0403 19:27:38.093010   66718 logs.go:282] 0 containers: []
	W0403 19:27:38.093021   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:38.093028   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:38.093098   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:38.125850   66718 cri.go:89] found id: ""
	I0403 19:27:38.125879   66718 logs.go:282] 0 containers: []
	W0403 19:27:38.125890   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:38.125897   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:38.125959   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:38.159841   66718 cri.go:89] found id: ""
	I0403 19:27:38.159873   66718 logs.go:282] 0 containers: []
	W0403 19:27:38.159885   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:38.159892   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:38.159954   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:38.192596   66718 cri.go:89] found id: ""
	I0403 19:27:38.192626   66718 logs.go:282] 0 containers: []
	W0403 19:27:38.192637   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:38.192644   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:38.192698   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:38.225109   66718 cri.go:89] found id: ""
	I0403 19:27:38.225134   66718 logs.go:282] 0 containers: []
	W0403 19:27:38.225141   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:38.225149   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:38.225214   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:38.259622   66718 cri.go:89] found id: ""
	I0403 19:27:38.259649   66718 logs.go:282] 0 containers: []
	W0403 19:27:38.259659   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:38.259666   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:38.259725   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:38.295982   66718 cri.go:89] found id: ""
	I0403 19:27:38.296010   66718 logs.go:282] 0 containers: []
	W0403 19:27:38.296021   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:38.296031   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:38.296094   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:38.329193   66718 cri.go:89] found id: ""
	I0403 19:27:38.329223   66718 logs.go:282] 0 containers: []
	W0403 19:27:38.329233   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:38.329241   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:38.329252   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:38.412669   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:38.412704   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:38.457747   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:38.457776   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:38.512641   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:38.512683   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:38.526848   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:38.526875   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:38.606032   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:41.106993   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:41.119845   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:41.119934   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:41.155479   66718 cri.go:89] found id: ""
	I0403 19:27:41.155510   66718 logs.go:282] 0 containers: []
	W0403 19:27:41.155521   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:41.155529   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:41.155590   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:41.191479   66718 cri.go:89] found id: ""
	I0403 19:27:41.191512   66718 logs.go:282] 0 containers: []
	W0403 19:27:41.191523   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:41.191530   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:41.191585   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:41.228336   66718 cri.go:89] found id: ""
	I0403 19:27:41.228367   66718 logs.go:282] 0 containers: []
	W0403 19:27:41.228378   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:41.228385   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:41.228496   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:41.267069   66718 cri.go:89] found id: ""
	I0403 19:27:41.267097   66718 logs.go:282] 0 containers: []
	W0403 19:27:41.267107   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:41.267114   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:41.267173   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:41.301863   66718 cri.go:89] found id: ""
	I0403 19:27:41.301912   66718 logs.go:282] 0 containers: []
	W0403 19:27:41.301919   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:41.301925   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:41.301973   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:41.337027   66718 cri.go:89] found id: ""
	I0403 19:27:41.337051   66718 logs.go:282] 0 containers: []
	W0403 19:27:41.337058   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:41.337064   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:41.337117   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:41.377365   66718 cri.go:89] found id: ""
	I0403 19:27:41.377400   66718 logs.go:282] 0 containers: []
	W0403 19:27:41.377410   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:41.377418   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:41.377492   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:41.414780   66718 cri.go:89] found id: ""
	I0403 19:27:41.414811   66718 logs.go:282] 0 containers: []
	W0403 19:27:41.414838   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:41.414850   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:41.414865   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:41.492674   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:41.492710   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:41.535928   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:41.535969   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:41.588431   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:41.588469   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:41.605162   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:41.605203   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:41.678175   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:44.179028   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:44.192298   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:44.192374   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:44.227171   66718 cri.go:89] found id: ""
	I0403 19:27:44.227194   66718 logs.go:282] 0 containers: []
	W0403 19:27:44.227205   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:44.227212   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:44.227273   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:44.259871   66718 cri.go:89] found id: ""
	I0403 19:27:44.259900   66718 logs.go:282] 0 containers: []
	W0403 19:27:44.259909   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:44.259916   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:44.259973   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:44.296306   66718 cri.go:89] found id: ""
	I0403 19:27:44.296330   66718 logs.go:282] 0 containers: []
	W0403 19:27:44.296339   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:44.296346   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:44.296398   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:44.332505   66718 cri.go:89] found id: ""
	I0403 19:27:44.332532   66718 logs.go:282] 0 containers: []
	W0403 19:27:44.332542   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:44.332549   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:44.332609   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:44.365692   66718 cri.go:89] found id: ""
	I0403 19:27:44.365715   66718 logs.go:282] 0 containers: []
	W0403 19:27:44.365723   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:44.365729   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:44.365789   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:44.400638   66718 cri.go:89] found id: ""
	I0403 19:27:44.400663   66718 logs.go:282] 0 containers: []
	W0403 19:27:44.400673   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:44.400681   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:44.400738   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:44.436877   66718 cri.go:89] found id: ""
	I0403 19:27:44.436906   66718 logs.go:282] 0 containers: []
	W0403 19:27:44.436916   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:44.436930   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:44.436994   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:44.470497   66718 cri.go:89] found id: ""
	I0403 19:27:44.470528   66718 logs.go:282] 0 containers: []
	W0403 19:27:44.470539   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:44.470549   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:44.470562   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:44.506991   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:44.507020   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:44.561636   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:44.561664   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:44.575474   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:44.575500   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:44.651228   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:44.651249   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:44.651264   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:47.247976   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:47.261863   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:47.261956   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:47.299778   66718 cri.go:89] found id: ""
	I0403 19:27:47.299800   66718 logs.go:282] 0 containers: []
	W0403 19:27:47.299809   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:47.299816   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:47.299873   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:47.337455   66718 cri.go:89] found id: ""
	I0403 19:27:47.337491   66718 logs.go:282] 0 containers: []
	W0403 19:27:47.337501   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:47.337508   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:47.337566   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:47.380578   66718 cri.go:89] found id: ""
	I0403 19:27:47.380610   66718 logs.go:282] 0 containers: []
	W0403 19:27:47.380621   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:47.380628   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:47.380691   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:47.419513   66718 cri.go:89] found id: ""
	I0403 19:27:47.419552   66718 logs.go:282] 0 containers: []
	W0403 19:27:47.419565   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:47.419572   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:47.419635   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:47.457093   66718 cri.go:89] found id: ""
	I0403 19:27:47.457116   66718 logs.go:282] 0 containers: []
	W0403 19:27:47.457126   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:47.457133   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:47.457195   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:47.488487   66718 cri.go:89] found id: ""
	I0403 19:27:47.488519   66718 logs.go:282] 0 containers: []
	W0403 19:27:47.488528   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:47.488533   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:47.488588   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:47.523080   66718 cri.go:89] found id: ""
	I0403 19:27:47.523104   66718 logs.go:282] 0 containers: []
	W0403 19:27:47.523111   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:47.523117   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:47.523175   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:47.558073   66718 cri.go:89] found id: ""
	I0403 19:27:47.558099   66718 logs.go:282] 0 containers: []
	W0403 19:27:47.558110   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:47.558120   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:47.558134   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:47.595118   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:47.595147   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:47.642087   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:47.642123   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:47.655882   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:47.655916   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:47.720448   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:47.720473   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:47.720488   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:50.302971   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:50.320348   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:50.320426   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:50.365261   66718 cri.go:89] found id: ""
	I0403 19:27:50.365287   66718 logs.go:282] 0 containers: []
	W0403 19:27:50.365299   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:50.365307   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:50.365359   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:50.402583   66718 cri.go:89] found id: ""
	I0403 19:27:50.402613   66718 logs.go:282] 0 containers: []
	W0403 19:27:50.402622   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:50.402630   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:50.402686   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:50.438534   66718 cri.go:89] found id: ""
	I0403 19:27:50.438565   66718 logs.go:282] 0 containers: []
	W0403 19:27:50.438575   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:50.438582   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:50.438639   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:50.476026   66718 cri.go:89] found id: ""
	I0403 19:27:50.476052   66718 logs.go:282] 0 containers: []
	W0403 19:27:50.476064   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:50.476072   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:50.476122   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:50.527658   66718 cri.go:89] found id: ""
	I0403 19:27:50.527688   66718 logs.go:282] 0 containers: []
	W0403 19:27:50.527705   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:50.527713   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:50.527776   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:50.573159   66718 cri.go:89] found id: ""
	I0403 19:27:50.573185   66718 logs.go:282] 0 containers: []
	W0403 19:27:50.573193   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:50.573201   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:50.573257   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:50.611236   66718 cri.go:89] found id: ""
	I0403 19:27:50.611268   66718 logs.go:282] 0 containers: []
	W0403 19:27:50.611280   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:50.611287   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:50.611352   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:50.645449   66718 cri.go:89] found id: ""
	I0403 19:27:50.645480   66718 logs.go:282] 0 containers: []
	W0403 19:27:50.645491   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:50.645502   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:50.645518   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:50.658703   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:50.658732   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:50.730512   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:50.730539   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:50.730554   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:50.832213   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:50.832263   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:50.881206   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:50.881243   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:53.444031   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:53.457224   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:53.457283   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:53.492276   66718 cri.go:89] found id: ""
	I0403 19:27:53.492305   66718 logs.go:282] 0 containers: []
	W0403 19:27:53.492316   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:53.492323   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:53.492381   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:53.528638   66718 cri.go:89] found id: ""
	I0403 19:27:53.528666   66718 logs.go:282] 0 containers: []
	W0403 19:27:53.528677   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:53.528687   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:53.528750   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:53.562581   66718 cri.go:89] found id: ""
	I0403 19:27:53.562609   66718 logs.go:282] 0 containers: []
	W0403 19:27:53.562619   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:53.562627   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:53.562685   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:53.594881   66718 cri.go:89] found id: ""
	I0403 19:27:53.594911   66718 logs.go:282] 0 containers: []
	W0403 19:27:53.594935   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:53.594942   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:53.595002   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:53.628191   66718 cri.go:89] found id: ""
	I0403 19:27:53.628213   66718 logs.go:282] 0 containers: []
	W0403 19:27:53.628219   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:53.628225   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:53.628270   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:53.662913   66718 cri.go:89] found id: ""
	I0403 19:27:53.662942   66718 logs.go:282] 0 containers: []
	W0403 19:27:53.662952   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:53.662959   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:53.663028   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:53.707759   66718 cri.go:89] found id: ""
	I0403 19:27:53.707784   66718 logs.go:282] 0 containers: []
	W0403 19:27:53.707792   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:53.707798   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:53.707850   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:53.746313   66718 cri.go:89] found id: ""
	I0403 19:27:53.746334   66718 logs.go:282] 0 containers: []
	W0403 19:27:53.746341   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:53.746349   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:53.746361   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:53.759406   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:53.759441   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:53.837064   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:53.837088   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:53.837101   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:53.928888   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:53.928923   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:53.970493   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:53.970515   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:56.524879   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:56.539031   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:56.539111   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:56.576435   66718 cri.go:89] found id: ""
	I0403 19:27:56.576460   66718 logs.go:282] 0 containers: []
	W0403 19:27:56.576467   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:56.576473   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:56.576527   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:56.614298   66718 cri.go:89] found id: ""
	I0403 19:27:56.614334   66718 logs.go:282] 0 containers: []
	W0403 19:27:56.614345   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:56.614352   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:56.614411   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:56.653869   66718 cri.go:89] found id: ""
	I0403 19:27:56.653898   66718 logs.go:282] 0 containers: []
	W0403 19:27:56.653908   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:56.653917   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:56.654011   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:56.685979   66718 cri.go:89] found id: ""
	I0403 19:27:56.686011   66718 logs.go:282] 0 containers: []
	W0403 19:27:56.686022   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:56.686029   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:56.686085   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:56.722403   66718 cri.go:89] found id: ""
	I0403 19:27:56.722443   66718 logs.go:282] 0 containers: []
	W0403 19:27:56.722456   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:56.722465   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:56.722525   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:56.760574   66718 cri.go:89] found id: ""
	I0403 19:27:56.760601   66718 logs.go:282] 0 containers: []
	W0403 19:27:56.760609   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:56.760615   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:56.760669   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:56.794844   66718 cri.go:89] found id: ""
	I0403 19:27:56.794870   66718 logs.go:282] 0 containers: []
	W0403 19:27:56.794880   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:56.794887   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:56.794944   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:56.828251   66718 cri.go:89] found id: ""
	I0403 19:27:56.828283   66718 logs.go:282] 0 containers: []
	W0403 19:27:56.828294   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:56.828306   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:56.828318   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:27:56.882262   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:27:56.882304   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:27:56.906600   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:27:56.906631   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:27:57.029394   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:27:57.029423   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:27:57.029439   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:27:57.112060   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:27:57.112091   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:27:59.653483   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:27:59.670229   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:27:59.670304   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:27:59.705866   66718 cri.go:89] found id: ""
	I0403 19:27:59.705895   66718 logs.go:282] 0 containers: []
	W0403 19:27:59.705906   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:27:59.705913   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:27:59.705972   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:27:59.744448   66718 cri.go:89] found id: ""
	I0403 19:27:59.744468   66718 logs.go:282] 0 containers: []
	W0403 19:27:59.744475   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:27:59.744480   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:27:59.744536   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:27:59.779867   66718 cri.go:89] found id: ""
	I0403 19:27:59.779900   66718 logs.go:282] 0 containers: []
	W0403 19:27:59.779911   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:27:59.779918   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:27:59.779969   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:27:59.818461   66718 cri.go:89] found id: ""
	I0403 19:27:59.818486   66718 logs.go:282] 0 containers: []
	W0403 19:27:59.818496   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:27:59.818503   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:27:59.818561   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:27:59.856037   66718 cri.go:89] found id: ""
	I0403 19:27:59.856070   66718 logs.go:282] 0 containers: []
	W0403 19:27:59.856081   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:27:59.856089   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:27:59.856149   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:27:59.891505   66718 cri.go:89] found id: ""
	I0403 19:27:59.891533   66718 logs.go:282] 0 containers: []
	W0403 19:27:59.891543   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:27:59.891553   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:27:59.891609   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:27:59.926277   66718 cri.go:89] found id: ""
	I0403 19:27:59.926310   66718 logs.go:282] 0 containers: []
	W0403 19:27:59.926321   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:27:59.926328   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:27:59.926381   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:27:59.975203   66718 cri.go:89] found id: ""
	I0403 19:27:59.975234   66718 logs.go:282] 0 containers: []
	W0403 19:27:59.975244   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:27:59.975253   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:27:59.975264   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:00.045628   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:00.045667   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:00.059002   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:00.059030   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:00.122722   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:00.122753   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:00.122768   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:00.197342   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:00.197379   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:02.736561   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:02.754941   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:02.755015   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:02.804225   66718 cri.go:89] found id: ""
	I0403 19:28:02.804251   66718 logs.go:282] 0 containers: []
	W0403 19:28:02.804262   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:02.804270   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:02.804330   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:02.857614   66718 cri.go:89] found id: ""
	I0403 19:28:02.857644   66718 logs.go:282] 0 containers: []
	W0403 19:28:02.857652   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:02.857657   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:02.857705   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:02.907804   66718 cri.go:89] found id: ""
	I0403 19:28:02.907832   66718 logs.go:282] 0 containers: []
	W0403 19:28:02.907840   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:02.907846   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:02.907901   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:02.941545   66718 cri.go:89] found id: ""
	I0403 19:28:02.941567   66718 logs.go:282] 0 containers: []
	W0403 19:28:02.941575   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:02.941581   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:02.941636   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:02.978541   66718 cri.go:89] found id: ""
	I0403 19:28:02.978568   66718 logs.go:282] 0 containers: []
	W0403 19:28:02.978580   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:02.978587   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:02.978647   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:03.014124   66718 cri.go:89] found id: ""
	I0403 19:28:03.014152   66718 logs.go:282] 0 containers: []
	W0403 19:28:03.014162   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:03.014168   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:03.014231   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:03.055062   66718 cri.go:89] found id: ""
	I0403 19:28:03.055089   66718 logs.go:282] 0 containers: []
	W0403 19:28:03.055097   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:03.055102   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:03.055147   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:03.097647   66718 cri.go:89] found id: ""
	I0403 19:28:03.097672   66718 logs.go:282] 0 containers: []
	W0403 19:28:03.097683   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:03.097693   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:03.097708   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:03.178228   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:03.178251   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:03.178264   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:03.260006   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:03.260040   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:03.303915   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:03.303956   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:03.360683   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:03.360716   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:05.878507   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:05.893372   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:05.893496   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:05.930586   66718 cri.go:89] found id: ""
	I0403 19:28:05.930608   66718 logs.go:282] 0 containers: []
	W0403 19:28:05.930616   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:05.930622   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:05.930665   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:05.969020   66718 cri.go:89] found id: ""
	I0403 19:28:05.969042   66718 logs.go:282] 0 containers: []
	W0403 19:28:05.969050   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:05.969056   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:05.969100   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:06.005085   66718 cri.go:89] found id: ""
	I0403 19:28:06.005112   66718 logs.go:282] 0 containers: []
	W0403 19:28:06.005121   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:06.005126   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:06.005174   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:06.035025   66718 cri.go:89] found id: ""
	I0403 19:28:06.035052   66718 logs.go:282] 0 containers: []
	W0403 19:28:06.035062   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:06.035069   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:06.035123   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:06.072129   66718 cri.go:89] found id: ""
	I0403 19:28:06.072159   66718 logs.go:282] 0 containers: []
	W0403 19:28:06.072167   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:06.072173   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:06.072258   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:06.111447   66718 cri.go:89] found id: ""
	I0403 19:28:06.111475   66718 logs.go:282] 0 containers: []
	W0403 19:28:06.111485   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:06.111492   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:06.111540   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:06.147568   66718 cri.go:89] found id: ""
	I0403 19:28:06.147603   66718 logs.go:282] 0 containers: []
	W0403 19:28:06.147614   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:06.147625   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:06.147688   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:06.181249   66718 cri.go:89] found id: ""
	I0403 19:28:06.181271   66718 logs.go:282] 0 containers: []
	W0403 19:28:06.181278   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:06.181288   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:06.181301   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:06.218434   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:06.218459   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:06.268069   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:06.268106   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:06.280924   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:06.280947   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:06.346844   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:06.346872   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:06.346887   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:08.930801   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:08.943220   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:08.943282   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:08.978061   66718 cri.go:89] found id: ""
	I0403 19:28:08.978088   66718 logs.go:282] 0 containers: []
	W0403 19:28:08.978099   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:08.978106   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:08.978166   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:09.014335   66718 cri.go:89] found id: ""
	I0403 19:28:09.014364   66718 logs.go:282] 0 containers: []
	W0403 19:28:09.014373   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:09.014378   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:09.014435   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:09.048730   66718 cri.go:89] found id: ""
	I0403 19:28:09.048758   66718 logs.go:282] 0 containers: []
	W0403 19:28:09.048769   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:09.048777   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:09.048837   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:09.081126   66718 cri.go:89] found id: ""
	I0403 19:28:09.081149   66718 logs.go:282] 0 containers: []
	W0403 19:28:09.081159   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:09.081165   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:09.081210   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:09.119064   66718 cri.go:89] found id: ""
	I0403 19:28:09.119094   66718 logs.go:282] 0 containers: []
	W0403 19:28:09.119104   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:09.119115   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:09.119180   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:09.152377   66718 cri.go:89] found id: ""
	I0403 19:28:09.152401   66718 logs.go:282] 0 containers: []
	W0403 19:28:09.152409   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:09.152415   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:09.152465   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:09.184552   66718 cri.go:89] found id: ""
	I0403 19:28:09.184575   66718 logs.go:282] 0 containers: []
	W0403 19:28:09.184584   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:09.184593   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:09.184646   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:09.216801   66718 cri.go:89] found id: ""
	I0403 19:28:09.216833   66718 logs.go:282] 0 containers: []
	W0403 19:28:09.216845   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:09.216855   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:09.216867   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:09.229584   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:09.229610   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:09.291662   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:09.291687   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:09.291706   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:09.364551   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:09.364583   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:09.402749   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:09.402786   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:11.956048   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:11.968464   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:11.968525   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:12.002453   66718 cri.go:89] found id: ""
	I0403 19:28:12.002477   66718 logs.go:282] 0 containers: []
	W0403 19:28:12.002485   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:12.002490   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:12.002550   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:12.036868   66718 cri.go:89] found id: ""
	I0403 19:28:12.036898   66718 logs.go:282] 0 containers: []
	W0403 19:28:12.036909   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:12.036917   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:12.036991   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:12.069441   66718 cri.go:89] found id: ""
	I0403 19:28:12.069475   66718 logs.go:282] 0 containers: []
	W0403 19:28:12.069484   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:12.069489   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:12.069536   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:12.101696   66718 cri.go:89] found id: ""
	I0403 19:28:12.101724   66718 logs.go:282] 0 containers: []
	W0403 19:28:12.101734   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:12.101741   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:12.101805   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:12.135756   66718 cri.go:89] found id: ""
	I0403 19:28:12.135781   66718 logs.go:282] 0 containers: []
	W0403 19:28:12.135789   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:12.135794   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:12.135859   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:12.168563   66718 cri.go:89] found id: ""
	I0403 19:28:12.168588   66718 logs.go:282] 0 containers: []
	W0403 19:28:12.168598   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:12.168605   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:12.168665   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:12.201759   66718 cri.go:89] found id: ""
	I0403 19:28:12.201797   66718 logs.go:282] 0 containers: []
	W0403 19:28:12.201809   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:12.201817   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:12.201879   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:12.234682   66718 cri.go:89] found id: ""
	I0403 19:28:12.234713   66718 logs.go:282] 0 containers: []
	W0403 19:28:12.234722   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:12.234736   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:12.234750   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:12.309167   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:12.309197   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:12.309213   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:12.382645   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:12.382677   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:12.427468   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:12.427500   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:12.481203   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:12.481236   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:14.994799   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:15.007082   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:15.007188   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:15.040848   66718 cri.go:89] found id: ""
	I0403 19:28:15.040876   66718 logs.go:282] 0 containers: []
	W0403 19:28:15.040887   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:15.040894   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:15.040953   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:15.072197   66718 cri.go:89] found id: ""
	I0403 19:28:15.072222   66718 logs.go:282] 0 containers: []
	W0403 19:28:15.072232   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:15.072240   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:15.072297   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:15.110511   66718 cri.go:89] found id: ""
	I0403 19:28:15.110539   66718 logs.go:282] 0 containers: []
	W0403 19:28:15.110549   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:15.110557   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:15.110621   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:15.146397   66718 cri.go:89] found id: ""
	I0403 19:28:15.146424   66718 logs.go:282] 0 containers: []
	W0403 19:28:15.146434   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:15.146441   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:15.146494   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:15.182278   66718 cri.go:89] found id: ""
	I0403 19:28:15.182307   66718 logs.go:282] 0 containers: []
	W0403 19:28:15.182318   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:15.182325   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:15.182380   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:15.213836   66718 cri.go:89] found id: ""
	I0403 19:28:15.213871   66718 logs.go:282] 0 containers: []
	W0403 19:28:15.213888   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:15.213896   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:15.213958   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:15.246584   66718 cri.go:89] found id: ""
	I0403 19:28:15.246612   66718 logs.go:282] 0 containers: []
	W0403 19:28:15.246622   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:15.246629   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:15.246683   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:15.278030   66718 cri.go:89] found id: ""
	I0403 19:28:15.278062   66718 logs.go:282] 0 containers: []
	W0403 19:28:15.278072   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:15.278083   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:15.278096   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:15.313181   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:15.313210   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:15.371493   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:15.371523   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:15.387740   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:15.387770   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:15.475060   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:15.475087   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:15.475103   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:18.090267   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:18.106599   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:18.106657   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:18.143016   66718 cri.go:89] found id: ""
	I0403 19:28:18.143046   66718 logs.go:282] 0 containers: []
	W0403 19:28:18.143053   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:18.143060   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:18.143123   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:18.174535   66718 cri.go:89] found id: ""
	I0403 19:28:18.174567   66718 logs.go:282] 0 containers: []
	W0403 19:28:18.174577   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:18.174582   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:18.174627   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:18.208001   66718 cri.go:89] found id: ""
	I0403 19:28:18.208025   66718 logs.go:282] 0 containers: []
	W0403 19:28:18.208032   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:18.208037   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:18.208083   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:18.244733   66718 cri.go:89] found id: ""
	I0403 19:28:18.244761   66718 logs.go:282] 0 containers: []
	W0403 19:28:18.244772   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:18.244781   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:18.244832   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:18.280518   66718 cri.go:89] found id: ""
	I0403 19:28:18.280549   66718 logs.go:282] 0 containers: []
	W0403 19:28:18.280559   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:18.280566   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:18.280625   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:18.318588   66718 cri.go:89] found id: ""
	I0403 19:28:18.318613   66718 logs.go:282] 0 containers: []
	W0403 19:28:18.318624   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:18.318631   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:18.318744   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:18.355811   66718 cri.go:89] found id: ""
	I0403 19:28:18.355843   66718 logs.go:282] 0 containers: []
	W0403 19:28:18.355853   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:18.355859   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:18.355928   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:18.389200   66718 cri.go:89] found id: ""
	I0403 19:28:18.389224   66718 logs.go:282] 0 containers: []
	W0403 19:28:18.389234   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:18.389245   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:18.389260   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:18.443512   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:18.443557   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:18.458884   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:18.458908   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:18.528556   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:18.528576   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:18.528588   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:18.615342   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:18.615374   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:21.156474   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:21.169393   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:21.169471   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:21.211831   66718 cri.go:89] found id: ""
	I0403 19:28:21.211857   66718 logs.go:282] 0 containers: []
	W0403 19:28:21.211868   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:21.211876   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:21.211968   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:21.244921   66718 cri.go:89] found id: ""
	I0403 19:28:21.244958   66718 logs.go:282] 0 containers: []
	W0403 19:28:21.244970   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:21.244980   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:21.245054   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:21.282115   66718 cri.go:89] found id: ""
	I0403 19:28:21.282144   66718 logs.go:282] 0 containers: []
	W0403 19:28:21.282154   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:21.282162   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:21.282220   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:21.315130   66718 cri.go:89] found id: ""
	I0403 19:28:21.315151   66718 logs.go:282] 0 containers: []
	W0403 19:28:21.315158   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:21.315163   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:21.315219   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:21.349643   66718 cri.go:89] found id: ""
	I0403 19:28:21.349673   66718 logs.go:282] 0 containers: []
	W0403 19:28:21.349685   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:21.349693   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:21.349754   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:21.382916   66718 cri.go:89] found id: ""
	I0403 19:28:21.382943   66718 logs.go:282] 0 containers: []
	W0403 19:28:21.382952   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:21.382960   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:21.383029   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:21.416105   66718 cri.go:89] found id: ""
	I0403 19:28:21.416133   66718 logs.go:282] 0 containers: []
	W0403 19:28:21.416143   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:21.416150   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:21.416221   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:21.451372   66718 cri.go:89] found id: ""
	I0403 19:28:21.451405   66718 logs.go:282] 0 containers: []
	W0403 19:28:21.451418   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:21.451429   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:21.451442   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:21.489432   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:21.489461   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:21.540372   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:21.540404   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:21.553078   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:21.553105   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:21.627647   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:21.627671   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:21.627685   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:24.209207   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:24.222327   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:24.222392   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:24.257822   66718 cri.go:89] found id: ""
	I0403 19:28:24.257850   66718 logs.go:282] 0 containers: []
	W0403 19:28:24.257861   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:24.257868   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:24.257924   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:24.300030   66718 cri.go:89] found id: ""
	I0403 19:28:24.300049   66718 logs.go:282] 0 containers: []
	W0403 19:28:24.300056   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:24.300060   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:24.300114   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:24.332615   66718 cri.go:89] found id: ""
	I0403 19:28:24.332644   66718 logs.go:282] 0 containers: []
	W0403 19:28:24.332651   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:24.332656   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:24.332708   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:24.367266   66718 cri.go:89] found id: ""
	I0403 19:28:24.367291   66718 logs.go:282] 0 containers: []
	W0403 19:28:24.367298   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:24.367303   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:24.367352   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:24.404329   66718 cri.go:89] found id: ""
	I0403 19:28:24.404352   66718 logs.go:282] 0 containers: []
	W0403 19:28:24.404359   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:24.404364   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:24.404453   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:24.443506   66718 cri.go:89] found id: ""
	I0403 19:28:24.443537   66718 logs.go:282] 0 containers: []
	W0403 19:28:24.443560   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:24.443567   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:24.443638   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:24.477263   66718 cri.go:89] found id: ""
	I0403 19:28:24.477297   66718 logs.go:282] 0 containers: []
	W0403 19:28:24.477307   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:24.477315   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:24.477366   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:24.513535   66718 cri.go:89] found id: ""
	I0403 19:28:24.513557   66718 logs.go:282] 0 containers: []
	W0403 19:28:24.513564   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:24.513572   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:24.513581   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:24.594743   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:24.594784   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:24.642345   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:24.642377   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:24.692114   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:24.692145   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:24.707356   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:24.707391   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:24.772339   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:27.272938   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:27.291257   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:27.291341   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:27.328784   66718 cri.go:89] found id: ""
	I0403 19:28:27.328816   66718 logs.go:282] 0 containers: []
	W0403 19:28:27.328827   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:27.328835   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:27.328905   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:27.366161   66718 cri.go:89] found id: ""
	I0403 19:28:27.366192   66718 logs.go:282] 0 containers: []
	W0403 19:28:27.366201   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:27.366205   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:27.366272   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:27.401514   66718 cri.go:89] found id: ""
	I0403 19:28:27.401549   66718 logs.go:282] 0 containers: []
	W0403 19:28:27.401562   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:27.401572   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:27.401641   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:27.444768   66718 cri.go:89] found id: ""
	I0403 19:28:27.444799   66718 logs.go:282] 0 containers: []
	W0403 19:28:27.444810   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:27.444817   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:27.444877   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:27.483768   66718 cri.go:89] found id: ""
	I0403 19:28:27.483800   66718 logs.go:282] 0 containers: []
	W0403 19:28:27.483809   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:27.483816   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:27.483874   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:27.520740   66718 cri.go:89] found id: ""
	I0403 19:28:27.520768   66718 logs.go:282] 0 containers: []
	W0403 19:28:27.520781   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:27.520788   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:27.520853   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:27.555629   66718 cri.go:89] found id: ""
	I0403 19:28:27.555658   66718 logs.go:282] 0 containers: []
	W0403 19:28:27.555668   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:27.555676   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:27.555734   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:27.591563   66718 cri.go:89] found id: ""
	I0403 19:28:27.591590   66718 logs.go:282] 0 containers: []
	W0403 19:28:27.591600   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:27.591610   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:27.591624   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:27.643759   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:27.643793   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:27.661531   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:27.661557   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:27.729815   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:27.729849   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:27.729864   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:27.816032   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:27.816079   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:30.357256   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:30.370161   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:30.370222   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:30.408612   66718 cri.go:89] found id: ""
	I0403 19:28:30.408645   66718 logs.go:282] 0 containers: []
	W0403 19:28:30.408657   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:30.408665   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:30.408720   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:30.443424   66718 cri.go:89] found id: ""
	I0403 19:28:30.443445   66718 logs.go:282] 0 containers: []
	W0403 19:28:30.443452   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:30.443457   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:30.443500   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:30.475625   66718 cri.go:89] found id: ""
	I0403 19:28:30.475647   66718 logs.go:282] 0 containers: []
	W0403 19:28:30.475654   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:30.475660   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:30.475702   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:30.507602   66718 cri.go:89] found id: ""
	I0403 19:28:30.507623   66718 logs.go:282] 0 containers: []
	W0403 19:28:30.507630   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:30.507636   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:30.507691   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:30.545335   66718 cri.go:89] found id: ""
	I0403 19:28:30.545362   66718 logs.go:282] 0 containers: []
	W0403 19:28:30.545371   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:30.545376   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:30.545453   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:30.577590   66718 cri.go:89] found id: ""
	I0403 19:28:30.577618   66718 logs.go:282] 0 containers: []
	W0403 19:28:30.577626   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:30.577632   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:30.577684   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:30.613552   66718 cri.go:89] found id: ""
	I0403 19:28:30.613577   66718 logs.go:282] 0 containers: []
	W0403 19:28:30.613588   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:30.613595   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:30.613652   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:30.644976   66718 cri.go:89] found id: ""
	I0403 19:28:30.645007   66718 logs.go:282] 0 containers: []
	W0403 19:28:30.645017   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:30.645028   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:30.645047   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:30.695686   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:30.695719   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:30.709146   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:30.709177   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:30.771248   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:30.771274   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:30.771289   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:30.852116   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:30.852152   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:33.391371   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:33.403574   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:33.403640   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:33.439599   66718 cri.go:89] found id: ""
	I0403 19:28:33.439626   66718 logs.go:282] 0 containers: []
	W0403 19:28:33.439636   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:33.439644   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:33.439701   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:33.472539   66718 cri.go:89] found id: ""
	I0403 19:28:33.472561   66718 logs.go:282] 0 containers: []
	W0403 19:28:33.472568   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:33.472574   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:33.472626   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:33.507690   66718 cri.go:89] found id: ""
	I0403 19:28:33.507721   66718 logs.go:282] 0 containers: []
	W0403 19:28:33.507732   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:33.507739   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:33.507796   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:33.542786   66718 cri.go:89] found id: ""
	I0403 19:28:33.542811   66718 logs.go:282] 0 containers: []
	W0403 19:28:33.542830   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:33.542839   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:33.542887   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:33.574742   66718 cri.go:89] found id: ""
	I0403 19:28:33.574767   66718 logs.go:282] 0 containers: []
	W0403 19:28:33.574773   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:33.574779   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:33.574847   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:33.607023   66718 cri.go:89] found id: ""
	I0403 19:28:33.607048   66718 logs.go:282] 0 containers: []
	W0403 19:28:33.607055   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:33.607061   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:33.607111   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:33.638785   66718 cri.go:89] found id: ""
	I0403 19:28:33.638807   66718 logs.go:282] 0 containers: []
	W0403 19:28:33.638814   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:33.638828   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:33.638874   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:33.671307   66718 cri.go:89] found id: ""
	I0403 19:28:33.671332   66718 logs.go:282] 0 containers: []
	W0403 19:28:33.671340   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:33.671348   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:33.671362   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:33.684283   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:33.684312   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:33.746670   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:33.746692   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:33.746703   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:33.822828   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:33.822857   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:33.862436   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:33.862460   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:36.412117   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:36.424328   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:36.424387   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:36.458133   66718 cri.go:89] found id: ""
	I0403 19:28:36.458163   66718 logs.go:282] 0 containers: []
	W0403 19:28:36.458171   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:36.458176   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:36.458223   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:36.491394   66718 cri.go:89] found id: ""
	I0403 19:28:36.491424   66718 logs.go:282] 0 containers: []
	W0403 19:28:36.491435   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:36.491441   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:36.491492   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:36.531835   66718 cri.go:89] found id: ""
	I0403 19:28:36.531864   66718 logs.go:282] 0 containers: []
	W0403 19:28:36.531872   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:36.531877   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:36.531937   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:36.566084   66718 cri.go:89] found id: ""
	I0403 19:28:36.566108   66718 logs.go:282] 0 containers: []
	W0403 19:28:36.566115   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:36.566121   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:36.566178   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:36.603144   66718 cri.go:89] found id: ""
	I0403 19:28:36.603180   66718 logs.go:282] 0 containers: []
	W0403 19:28:36.603188   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:36.603194   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:36.603245   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:36.634990   66718 cri.go:89] found id: ""
	I0403 19:28:36.635016   66718 logs.go:282] 0 containers: []
	W0403 19:28:36.635023   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:36.635029   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:36.635076   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:36.668641   66718 cri.go:89] found id: ""
	I0403 19:28:36.668676   66718 logs.go:282] 0 containers: []
	W0403 19:28:36.668687   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:36.668695   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:36.668752   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:36.701166   66718 cri.go:89] found id: ""
	I0403 19:28:36.701197   66718 logs.go:282] 0 containers: []
	W0403 19:28:36.701204   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:36.701214   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:36.701224   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:36.750281   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:36.750314   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:36.762733   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:36.762764   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:36.827479   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:36.827505   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:36.827518   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:36.902601   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:36.902638   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:39.439284   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:39.453194   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:39.453262   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:39.486224   66718 cri.go:89] found id: ""
	I0403 19:28:39.486254   66718 logs.go:282] 0 containers: []
	W0403 19:28:39.486264   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:39.486272   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:39.486334   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:39.517470   66718 cri.go:89] found id: ""
	I0403 19:28:39.517499   66718 logs.go:282] 0 containers: []
	W0403 19:28:39.517508   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:39.517513   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:39.517559   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:39.548041   66718 cri.go:89] found id: ""
	I0403 19:28:39.548070   66718 logs.go:282] 0 containers: []
	W0403 19:28:39.548080   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:39.548087   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:39.548146   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:39.582606   66718 cri.go:89] found id: ""
	I0403 19:28:39.582638   66718 logs.go:282] 0 containers: []
	W0403 19:28:39.582647   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:39.582654   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:39.582710   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:39.614029   66718 cri.go:89] found id: ""
	I0403 19:28:39.614060   66718 logs.go:282] 0 containers: []
	W0403 19:28:39.614071   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:39.614077   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:39.614134   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:39.645253   66718 cri.go:89] found id: ""
	I0403 19:28:39.645287   66718 logs.go:282] 0 containers: []
	W0403 19:28:39.645294   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:39.645300   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:39.645354   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:39.675811   66718 cri.go:89] found id: ""
	I0403 19:28:39.675833   66718 logs.go:282] 0 containers: []
	W0403 19:28:39.675840   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:39.675846   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:39.675897   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:39.709656   66718 cri.go:89] found id: ""
	I0403 19:28:39.709681   66718 logs.go:282] 0 containers: []
	W0403 19:28:39.709689   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:39.709697   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:39.709709   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:39.742975   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:39.743001   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:39.791513   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:39.791539   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:39.804415   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:39.804440   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:39.878176   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:39.878205   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:39.878216   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:42.457172   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:42.471081   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:42.471138   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:42.507767   66718 cri.go:89] found id: ""
	I0403 19:28:42.507838   66718 logs.go:282] 0 containers: []
	W0403 19:28:42.507855   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:42.507862   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:42.507919   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:42.543104   66718 cri.go:89] found id: ""
	I0403 19:28:42.543135   66718 logs.go:282] 0 containers: []
	W0403 19:28:42.543145   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:42.543153   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:42.543219   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:42.580738   66718 cri.go:89] found id: ""
	I0403 19:28:42.580761   66718 logs.go:282] 0 containers: []
	W0403 19:28:42.580768   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:42.580773   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:42.580821   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:42.612274   66718 cri.go:89] found id: ""
	I0403 19:28:42.612303   66718 logs.go:282] 0 containers: []
	W0403 19:28:42.612315   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:42.612322   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:42.612378   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:42.647406   66718 cri.go:89] found id: ""
	I0403 19:28:42.647441   66718 logs.go:282] 0 containers: []
	W0403 19:28:42.647451   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:42.647460   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:42.647524   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:42.689126   66718 cri.go:89] found id: ""
	I0403 19:28:42.689165   66718 logs.go:282] 0 containers: []
	W0403 19:28:42.689173   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:42.689179   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:42.689236   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:42.739009   66718 cri.go:89] found id: ""
	I0403 19:28:42.739036   66718 logs.go:282] 0 containers: []
	W0403 19:28:42.739044   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:42.739050   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:42.739114   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:42.780151   66718 cri.go:89] found id: ""
	I0403 19:28:42.780178   66718 logs.go:282] 0 containers: []
	W0403 19:28:42.780193   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:42.780216   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:42.780244   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:42.794800   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:42.794844   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:42.873654   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:42.873681   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:42.873695   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:42.964764   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:42.964800   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:43.005076   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:43.005100   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:45.570354   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:45.588156   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:45.588229   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:45.637191   66718 cri.go:89] found id: ""
	I0403 19:28:45.637220   66718 logs.go:282] 0 containers: []
	W0403 19:28:45.637227   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:45.637233   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:45.637285   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:45.677525   66718 cri.go:89] found id: ""
	I0403 19:28:45.677552   66718 logs.go:282] 0 containers: []
	W0403 19:28:45.677563   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:45.677569   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:45.677628   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:45.734001   66718 cri.go:89] found id: ""
	I0403 19:28:45.734027   66718 logs.go:282] 0 containers: []
	W0403 19:28:45.734036   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:45.734043   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:45.734103   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:45.774491   66718 cri.go:89] found id: ""
	I0403 19:28:45.774512   66718 logs.go:282] 0 containers: []
	W0403 19:28:45.774520   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:45.774525   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:45.774581   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:45.821689   66718 cri.go:89] found id: ""
	I0403 19:28:45.821711   66718 logs.go:282] 0 containers: []
	W0403 19:28:45.821717   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:45.821723   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:45.821777   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:45.853058   66718 cri.go:89] found id: ""
	I0403 19:28:45.853083   66718 logs.go:282] 0 containers: []
	W0403 19:28:45.853090   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:45.853104   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:45.853162   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:45.885413   66718 cri.go:89] found id: ""
	I0403 19:28:45.885442   66718 logs.go:282] 0 containers: []
	W0403 19:28:45.885452   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:45.885460   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:45.885521   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:45.921432   66718 cri.go:89] found id: ""
	I0403 19:28:45.921468   66718 logs.go:282] 0 containers: []
	W0403 19:28:45.921479   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:45.921491   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:45.921504   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:45.935854   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:45.935886   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:46.024468   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:46.024493   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:46.024508   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:46.119825   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:46.119860   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:46.167872   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:46.167905   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:48.747218   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:48.759693   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:48.759756   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:48.803436   66718 cri.go:89] found id: ""
	I0403 19:28:48.803457   66718 logs.go:282] 0 containers: []
	W0403 19:28:48.803464   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:48.803469   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:48.803513   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:48.843361   66718 cri.go:89] found id: ""
	I0403 19:28:48.843389   66718 logs.go:282] 0 containers: []
	W0403 19:28:48.843400   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:48.843408   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:48.843468   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:48.879611   66718 cri.go:89] found id: ""
	I0403 19:28:48.879645   66718 logs.go:282] 0 containers: []
	W0403 19:28:48.879656   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:48.879663   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:48.879733   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:48.917827   66718 cri.go:89] found id: ""
	I0403 19:28:48.917856   66718 logs.go:282] 0 containers: []
	W0403 19:28:48.917867   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:48.917874   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:48.917929   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:48.955817   66718 cri.go:89] found id: ""
	I0403 19:28:48.955850   66718 logs.go:282] 0 containers: []
	W0403 19:28:48.955862   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:48.955868   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:48.955915   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:48.994111   66718 cri.go:89] found id: ""
	I0403 19:28:48.994207   66718 logs.go:282] 0 containers: []
	W0403 19:28:48.994227   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:48.994235   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:48.994297   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:49.026842   66718 cri.go:89] found id: ""
	I0403 19:28:49.026880   66718 logs.go:282] 0 containers: []
	W0403 19:28:49.026893   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:49.026902   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:49.026966   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:49.060374   66718 cri.go:89] found id: ""
	I0403 19:28:49.060408   66718 logs.go:282] 0 containers: []
	W0403 19:28:49.060418   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:49.060429   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:49.060446   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:49.174889   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:49.174930   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:49.250935   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:49.250959   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:49.318942   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:49.318971   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:49.337271   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:49.337294   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:49.432485   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:51.933144   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:51.947204   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:51.947288   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:51.984793   66718 cri.go:89] found id: ""
	I0403 19:28:51.984822   66718 logs.go:282] 0 containers: []
	W0403 19:28:51.984833   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:51.984841   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:51.984903   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:52.016518   66718 cri.go:89] found id: ""
	I0403 19:28:52.016546   66718 logs.go:282] 0 containers: []
	W0403 19:28:52.016556   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:52.016564   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:52.016620   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:52.050460   66718 cri.go:89] found id: ""
	I0403 19:28:52.050484   66718 logs.go:282] 0 containers: []
	W0403 19:28:52.050495   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:52.050501   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:52.050555   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:52.089529   66718 cri.go:89] found id: ""
	I0403 19:28:52.089557   66718 logs.go:282] 0 containers: []
	W0403 19:28:52.089567   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:52.089574   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:52.089637   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:52.126798   66718 cri.go:89] found id: ""
	I0403 19:28:52.126856   66718 logs.go:282] 0 containers: []
	W0403 19:28:52.126868   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:52.126875   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:52.126934   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:52.165774   66718 cri.go:89] found id: ""
	I0403 19:28:52.165802   66718 logs.go:282] 0 containers: []
	W0403 19:28:52.165813   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:52.165821   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:52.165883   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:52.201326   66718 cri.go:89] found id: ""
	I0403 19:28:52.201348   66718 logs.go:282] 0 containers: []
	W0403 19:28:52.201357   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:52.201364   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:52.201422   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:52.238867   66718 cri.go:89] found id: ""
	I0403 19:28:52.238892   66718 logs.go:282] 0 containers: []
	W0403 19:28:52.238902   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:52.238914   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:52.238929   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:52.297674   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:52.297704   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:52.310865   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:52.310900   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:52.382754   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:52.382779   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:52.382797   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:52.461374   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:52.461407   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:55.003397   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:55.024521   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:55.024590   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:55.083863   66718 cri.go:89] found id: ""
	I0403 19:28:55.083887   66718 logs.go:282] 0 containers: []
	W0403 19:28:55.083897   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:55.083905   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:55.083982   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:55.160443   66718 cri.go:89] found id: ""
	I0403 19:28:55.160472   66718 logs.go:282] 0 containers: []
	W0403 19:28:55.160481   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:55.160488   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:55.160550   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:55.197667   66718 cri.go:89] found id: ""
	I0403 19:28:55.197697   66718 logs.go:282] 0 containers: []
	W0403 19:28:55.197706   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:55.197714   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:55.197777   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:55.236630   66718 cri.go:89] found id: ""
	I0403 19:28:55.236673   66718 logs.go:282] 0 containers: []
	W0403 19:28:55.236684   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:55.236692   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:55.236769   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:55.278237   66718 cri.go:89] found id: ""
	I0403 19:28:55.278352   66718 logs.go:282] 0 containers: []
	W0403 19:28:55.278397   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:55.278421   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:55.278491   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:55.324214   66718 cri.go:89] found id: ""
	I0403 19:28:55.324248   66718 logs.go:282] 0 containers: []
	W0403 19:28:55.324258   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:55.324265   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:55.324328   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:55.370081   66718 cri.go:89] found id: ""
	I0403 19:28:55.370114   66718 logs.go:282] 0 containers: []
	W0403 19:28:55.370126   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:55.370135   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:55.370199   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:55.414655   66718 cri.go:89] found id: ""
	I0403 19:28:55.414740   66718 logs.go:282] 0 containers: []
	W0403 19:28:55.414764   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:55.414785   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:55.414842   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:55.503014   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:55.503046   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:55.542467   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:55.542495   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:28:55.609121   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:55.609162   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:55.623208   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:55.623237   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:55.692792   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:58.193463   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:28:58.206323   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:28:58.206401   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:28:58.243640   66718 cri.go:89] found id: ""
	I0403 19:28:58.243673   66718 logs.go:282] 0 containers: []
	W0403 19:28:58.243684   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:28:58.243691   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:28:58.243747   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:28:58.282108   66718 cri.go:89] found id: ""
	I0403 19:28:58.282133   66718 logs.go:282] 0 containers: []
	W0403 19:28:58.282142   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:28:58.282150   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:28:58.282207   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:28:58.317657   66718 cri.go:89] found id: ""
	I0403 19:28:58.317687   66718 logs.go:282] 0 containers: []
	W0403 19:28:58.317699   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:28:58.317710   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:28:58.317773   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:28:58.361262   66718 cri.go:89] found id: ""
	I0403 19:28:58.361293   66718 logs.go:282] 0 containers: []
	W0403 19:28:58.361304   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:28:58.361319   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:28:58.361380   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:28:58.405697   66718 cri.go:89] found id: ""
	I0403 19:28:58.405718   66718 logs.go:282] 0 containers: []
	W0403 19:28:58.405724   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:28:58.405730   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:28:58.405789   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:28:58.442290   66718 cri.go:89] found id: ""
	I0403 19:28:58.442320   66718 logs.go:282] 0 containers: []
	W0403 19:28:58.442331   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:28:58.442339   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:28:58.442403   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:28:58.481761   66718 cri.go:89] found id: ""
	I0403 19:28:58.481788   66718 logs.go:282] 0 containers: []
	W0403 19:28:58.481799   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:28:58.481810   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:28:58.481870   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:28:58.518212   66718 cri.go:89] found id: ""
	I0403 19:28:58.518237   66718 logs.go:282] 0 containers: []
	W0403 19:28:58.518244   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:28:58.518252   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:28:58.518262   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:28:58.531951   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:28:58.531985   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:28:58.619525   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:28:58.619550   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:28:58.619564   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:28:58.697116   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:28:58.697152   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:28:58.737533   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:28:58.737576   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:01.297206   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:01.314865   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:01.314935   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:01.356249   66718 cri.go:89] found id: ""
	I0403 19:29:01.356277   66718 logs.go:282] 0 containers: []
	W0403 19:29:01.356287   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:01.356294   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:01.356349   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:01.392114   66718 cri.go:89] found id: ""
	I0403 19:29:01.392142   66718 logs.go:282] 0 containers: []
	W0403 19:29:01.392152   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:01.392160   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:01.392214   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:01.424237   66718 cri.go:89] found id: ""
	I0403 19:29:01.424264   66718 logs.go:282] 0 containers: []
	W0403 19:29:01.424274   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:01.424281   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:01.424338   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:01.457892   66718 cri.go:89] found id: ""
	I0403 19:29:01.457931   66718 logs.go:282] 0 containers: []
	W0403 19:29:01.457943   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:01.457951   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:01.458006   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:01.492680   66718 cri.go:89] found id: ""
	I0403 19:29:01.492708   66718 logs.go:282] 0 containers: []
	W0403 19:29:01.492717   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:01.492724   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:01.492784   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:01.528072   66718 cri.go:89] found id: ""
	I0403 19:29:01.528101   66718 logs.go:282] 0 containers: []
	W0403 19:29:01.528110   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:01.528117   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:01.528176   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:01.561098   66718 cri.go:89] found id: ""
	I0403 19:29:01.561131   66718 logs.go:282] 0 containers: []
	W0403 19:29:01.561142   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:01.561149   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:01.561218   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:01.596584   66718 cri.go:89] found id: ""
	I0403 19:29:01.596608   66718 logs.go:282] 0 containers: []
	W0403 19:29:01.596615   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:01.596624   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:01.596633   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:01.685139   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:01.685169   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:01.724432   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:01.724461   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:01.800916   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:01.800954   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:01.814328   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:01.814356   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:01.891133   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:04.392044   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:04.410216   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:04.410300   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:04.464533   66718 cri.go:89] found id: ""
	I0403 19:29:04.464563   66718 logs.go:282] 0 containers: []
	W0403 19:29:04.464574   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:04.464615   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:04.464684   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:04.517287   66718 cri.go:89] found id: ""
	I0403 19:29:04.517315   66718 logs.go:282] 0 containers: []
	W0403 19:29:04.517325   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:04.517334   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:04.517399   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:04.571106   66718 cri.go:89] found id: ""
	I0403 19:29:04.571133   66718 logs.go:282] 0 containers: []
	W0403 19:29:04.571144   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:04.571151   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:04.571216   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:04.619771   66718 cri.go:89] found id: ""
	I0403 19:29:04.619810   66718 logs.go:282] 0 containers: []
	W0403 19:29:04.619821   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:04.619829   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:04.619893   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:04.664692   66718 cri.go:89] found id: ""
	I0403 19:29:04.664719   66718 logs.go:282] 0 containers: []
	W0403 19:29:04.664728   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:04.664736   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:04.664787   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:04.705432   66718 cri.go:89] found id: ""
	I0403 19:29:04.705457   66718 logs.go:282] 0 containers: []
	W0403 19:29:04.705467   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:04.705475   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:04.705523   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:04.744480   66718 cri.go:89] found id: ""
	I0403 19:29:04.744506   66718 logs.go:282] 0 containers: []
	W0403 19:29:04.744516   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:04.744523   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:04.744581   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:04.788563   66718 cri.go:89] found id: ""
	I0403 19:29:04.788589   66718 logs.go:282] 0 containers: []
	W0403 19:29:04.788597   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:04.788606   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:04.788618   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:04.882577   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:04.882603   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:04.882617   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:05.001729   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:05.001784   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:05.052746   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:05.052770   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:05.128873   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:05.128975   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:07.647996   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:07.666110   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:07.666179   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:07.705524   66718 cri.go:89] found id: ""
	I0403 19:29:07.705553   66718 logs.go:282] 0 containers: []
	W0403 19:29:07.705565   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:07.705574   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:07.705634   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:07.749372   66718 cri.go:89] found id: ""
	I0403 19:29:07.749400   66718 logs.go:282] 0 containers: []
	W0403 19:29:07.749409   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:07.749417   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:07.749474   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:07.793367   66718 cri.go:89] found id: ""
	I0403 19:29:07.793390   66718 logs.go:282] 0 containers: []
	W0403 19:29:07.793400   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:07.793406   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:07.793456   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:07.833814   66718 cri.go:89] found id: ""
	I0403 19:29:07.833843   66718 logs.go:282] 0 containers: []
	W0403 19:29:07.833854   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:07.833861   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:07.833942   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:07.878030   66718 cri.go:89] found id: ""
	I0403 19:29:07.878056   66718 logs.go:282] 0 containers: []
	W0403 19:29:07.878067   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:07.878073   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:07.878129   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:07.922473   66718 cri.go:89] found id: ""
	I0403 19:29:07.922496   66718 logs.go:282] 0 containers: []
	W0403 19:29:07.922503   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:07.922510   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:07.922575   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:07.969436   66718 cri.go:89] found id: ""
	I0403 19:29:07.969461   66718 logs.go:282] 0 containers: []
	W0403 19:29:07.969472   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:07.969478   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:07.969537   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:08.007042   66718 cri.go:89] found id: ""
	I0403 19:29:08.007068   66718 logs.go:282] 0 containers: []
	W0403 19:29:08.007080   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:08.007091   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:08.007105   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:08.020667   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:08.020706   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:08.089342   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:08.089365   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:08.089377   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:08.168043   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:08.168076   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:08.204065   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:08.204092   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:10.760670   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:10.774364   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:10.774441   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:10.814914   66718 cri.go:89] found id: ""
	I0403 19:29:10.814942   66718 logs.go:282] 0 containers: []
	W0403 19:29:10.814951   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:10.814958   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:10.815020   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:10.851035   66718 cri.go:89] found id: ""
	I0403 19:29:10.851061   66718 logs.go:282] 0 containers: []
	W0403 19:29:10.851071   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:10.851079   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:10.851133   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:10.884338   66718 cri.go:89] found id: ""
	I0403 19:29:10.884367   66718 logs.go:282] 0 containers: []
	W0403 19:29:10.884376   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:10.884383   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:10.884440   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:10.922027   66718 cri.go:89] found id: ""
	I0403 19:29:10.922052   66718 logs.go:282] 0 containers: []
	W0403 19:29:10.922061   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:10.922067   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:10.922112   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:10.954616   66718 cri.go:89] found id: ""
	I0403 19:29:10.954643   66718 logs.go:282] 0 containers: []
	W0403 19:29:10.954651   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:10.954656   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:10.954713   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:10.988838   66718 cri.go:89] found id: ""
	I0403 19:29:10.988867   66718 logs.go:282] 0 containers: []
	W0403 19:29:10.988878   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:10.988885   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:10.988956   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:11.032873   66718 cri.go:89] found id: ""
	I0403 19:29:11.032901   66718 logs.go:282] 0 containers: []
	W0403 19:29:11.032914   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:11.032919   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:11.032978   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:11.072464   66718 cri.go:89] found id: ""
	I0403 19:29:11.072494   66718 logs.go:282] 0 containers: []
	W0403 19:29:11.072504   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:11.072514   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:11.072530   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:11.144919   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:11.144955   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:11.158929   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:11.158969   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:11.230050   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:11.230075   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:11.230089   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:11.304920   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:11.304953   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:13.845020   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:13.860146   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:13.860220   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:13.895806   66718 cri.go:89] found id: ""
	I0403 19:29:13.895836   66718 logs.go:282] 0 containers: []
	W0403 19:29:13.895847   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:13.895854   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:13.895902   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:13.935816   66718 cri.go:89] found id: ""
	I0403 19:29:13.935848   66718 logs.go:282] 0 containers: []
	W0403 19:29:13.935857   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:13.935862   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:13.935917   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:13.972436   66718 cri.go:89] found id: ""
	I0403 19:29:13.972464   66718 logs.go:282] 0 containers: []
	W0403 19:29:13.972475   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:13.972482   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:13.972537   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:14.005451   66718 cri.go:89] found id: ""
	I0403 19:29:14.005481   66718 logs.go:282] 0 containers: []
	W0403 19:29:14.005493   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:14.005502   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:14.005556   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:14.052414   66718 cri.go:89] found id: ""
	I0403 19:29:14.052445   66718 logs.go:282] 0 containers: []
	W0403 19:29:14.052456   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:14.052464   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:14.052522   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:14.086539   66718 cri.go:89] found id: ""
	I0403 19:29:14.086560   66718 logs.go:282] 0 containers: []
	W0403 19:29:14.086567   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:14.086573   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:14.086615   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:14.119009   66718 cri.go:89] found id: ""
	I0403 19:29:14.119034   66718 logs.go:282] 0 containers: []
	W0403 19:29:14.119041   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:14.119046   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:14.119090   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:14.154118   66718 cri.go:89] found id: ""
	I0403 19:29:14.154150   66718 logs.go:282] 0 containers: []
	W0403 19:29:14.154160   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:14.154181   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:14.154194   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:14.205596   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:14.205628   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:14.220747   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:14.220789   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:14.284501   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:14.284538   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:14.284552   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:14.370351   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:14.370381   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:16.913256   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:16.927940   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:16.928008   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:16.963493   66718 cri.go:89] found id: ""
	I0403 19:29:16.963519   66718 logs.go:282] 0 containers: []
	W0403 19:29:16.963529   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:16.963537   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:16.963603   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:16.996215   66718 cri.go:89] found id: ""
	I0403 19:29:16.996244   66718 logs.go:282] 0 containers: []
	W0403 19:29:16.996255   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:16.996262   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:16.996322   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:17.028723   66718 cri.go:89] found id: ""
	I0403 19:29:17.028751   66718 logs.go:282] 0 containers: []
	W0403 19:29:17.028761   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:17.028768   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:17.028829   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:17.062418   66718 cri.go:89] found id: ""
	I0403 19:29:17.062440   66718 logs.go:282] 0 containers: []
	W0403 19:29:17.062448   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:17.062453   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:17.062508   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:17.099859   66718 cri.go:89] found id: ""
	I0403 19:29:17.099885   66718 logs.go:282] 0 containers: []
	W0403 19:29:17.099895   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:17.099902   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:17.099974   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:17.132546   66718 cri.go:89] found id: ""
	I0403 19:29:17.132580   66718 logs.go:282] 0 containers: []
	W0403 19:29:17.132593   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:17.132604   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:17.132665   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:17.166813   66718 cri.go:89] found id: ""
	I0403 19:29:17.166859   66718 logs.go:282] 0 containers: []
	W0403 19:29:17.166874   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:17.166881   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:17.166945   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:17.205329   66718 cri.go:89] found id: ""
	I0403 19:29:17.205358   66718 logs.go:282] 0 containers: []
	W0403 19:29:17.205368   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:17.205379   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:17.205393   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:17.290232   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:17.290269   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:17.339224   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:17.339252   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:17.399962   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:17.399996   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:17.412829   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:17.412856   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:17.480264   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:19.980475   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:19.993187   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:19.993253   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:20.027474   66718 cri.go:89] found id: ""
	I0403 19:29:20.027504   66718 logs.go:282] 0 containers: []
	W0403 19:29:20.027515   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:20.027523   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:20.027580   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:20.060617   66718 cri.go:89] found id: ""
	I0403 19:29:20.060645   66718 logs.go:282] 0 containers: []
	W0403 19:29:20.060656   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:20.060663   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:20.060723   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:20.092288   66718 cri.go:89] found id: ""
	I0403 19:29:20.092313   66718 logs.go:282] 0 containers: []
	W0403 19:29:20.092320   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:20.092334   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:20.092391   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:20.124651   66718 cri.go:89] found id: ""
	I0403 19:29:20.124681   66718 logs.go:282] 0 containers: []
	W0403 19:29:20.124691   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:20.124698   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:20.124756   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:20.162098   66718 cri.go:89] found id: ""
	I0403 19:29:20.162125   66718 logs.go:282] 0 containers: []
	W0403 19:29:20.162136   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:20.162144   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:20.162199   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:20.196892   66718 cri.go:89] found id: ""
	I0403 19:29:20.196912   66718 logs.go:282] 0 containers: []
	W0403 19:29:20.196920   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:20.196926   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:20.196980   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:20.237815   66718 cri.go:89] found id: ""
	I0403 19:29:20.237841   66718 logs.go:282] 0 containers: []
	W0403 19:29:20.237849   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:20.237854   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:20.237910   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:20.270898   66718 cri.go:89] found id: ""
	I0403 19:29:20.270925   66718 logs.go:282] 0 containers: []
	W0403 19:29:20.270934   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:20.270944   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:20.270958   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:20.284390   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:20.284414   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:20.357808   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:20.357836   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:20.357853   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:20.433417   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:20.433448   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:20.478452   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:20.478486   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:23.045621   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:23.061944   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:23.062020   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:23.102059   66718 cri.go:89] found id: ""
	I0403 19:29:23.102089   66718 logs.go:282] 0 containers: []
	W0403 19:29:23.102099   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:23.102106   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:23.102166   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:23.137313   66718 cri.go:89] found id: ""
	I0403 19:29:23.137340   66718 logs.go:282] 0 containers: []
	W0403 19:29:23.137350   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:23.137355   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:23.137398   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:23.173691   66718 cri.go:89] found id: ""
	I0403 19:29:23.173718   66718 logs.go:282] 0 containers: []
	W0403 19:29:23.173728   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:23.173734   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:23.173793   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:23.209143   66718 cri.go:89] found id: ""
	I0403 19:29:23.209172   66718 logs.go:282] 0 containers: []
	W0403 19:29:23.209183   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:23.209190   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:23.209270   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:23.247112   66718 cri.go:89] found id: ""
	I0403 19:29:23.247138   66718 logs.go:282] 0 containers: []
	W0403 19:29:23.247148   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:23.247155   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:23.247210   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:23.286201   66718 cri.go:89] found id: ""
	I0403 19:29:23.286226   66718 logs.go:282] 0 containers: []
	W0403 19:29:23.286236   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:23.286244   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:23.286303   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:23.322630   66718 cri.go:89] found id: ""
	I0403 19:29:23.322649   66718 logs.go:282] 0 containers: []
	W0403 19:29:23.322656   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:23.322661   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:23.322717   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:23.360847   66718 cri.go:89] found id: ""
	I0403 19:29:23.360877   66718 logs.go:282] 0 containers: []
	W0403 19:29:23.360884   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:23.360893   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:23.360907   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:23.414875   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:23.414906   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:23.430254   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:23.430284   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:23.526437   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:23.526462   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:23.526476   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:23.607232   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:23.607268   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:26.149051   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:26.167565   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:26.167633   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:26.211221   66718 cri.go:89] found id: ""
	I0403 19:29:26.211250   66718 logs.go:282] 0 containers: []
	W0403 19:29:26.211261   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:26.211267   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:26.211334   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:26.253625   66718 cri.go:89] found id: ""
	I0403 19:29:26.253651   66718 logs.go:282] 0 containers: []
	W0403 19:29:26.253658   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:26.253664   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:26.253725   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:26.286880   66718 cri.go:89] found id: ""
	I0403 19:29:26.286906   66718 logs.go:282] 0 containers: []
	W0403 19:29:26.286914   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:26.286923   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:26.286983   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:26.328303   66718 cri.go:89] found id: ""
	I0403 19:29:26.328378   66718 logs.go:282] 0 containers: []
	W0403 19:29:26.328391   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:26.328399   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:26.328458   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:26.367248   66718 cri.go:89] found id: ""
	I0403 19:29:26.367279   66718 logs.go:282] 0 containers: []
	W0403 19:29:26.367296   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:26.367307   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:26.367367   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:26.413164   66718 cri.go:89] found id: ""
	I0403 19:29:26.413193   66718 logs.go:282] 0 containers: []
	W0403 19:29:26.413203   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:26.413211   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:26.413271   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:26.447772   66718 cri.go:89] found id: ""
	I0403 19:29:26.447814   66718 logs.go:282] 0 containers: []
	W0403 19:29:26.447825   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:26.447832   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:26.447892   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:26.481507   66718 cri.go:89] found id: ""
	I0403 19:29:26.481529   66718 logs.go:282] 0 containers: []
	W0403 19:29:26.481536   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:26.481544   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:26.481554   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:26.519302   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:26.519332   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:26.587715   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:26.587752   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:26.603962   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:26.603998   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:26.681789   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:26.681814   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:26.681829   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:29.272563   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:29.286814   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:29.286902   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:29.323944   66718 cri.go:89] found id: ""
	I0403 19:29:29.323974   66718 logs.go:282] 0 containers: []
	W0403 19:29:29.323984   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:29.323991   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:29.324048   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:29.361700   66718 cri.go:89] found id: ""
	I0403 19:29:29.361723   66718 logs.go:282] 0 containers: []
	W0403 19:29:29.361732   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:29.361739   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:29.361794   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:29.400894   66718 cri.go:89] found id: ""
	I0403 19:29:29.400917   66718 logs.go:282] 0 containers: []
	W0403 19:29:29.400927   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:29.400934   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:29.400990   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:29.435995   66718 cri.go:89] found id: ""
	I0403 19:29:29.436018   66718 logs.go:282] 0 containers: []
	W0403 19:29:29.436029   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:29.436035   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:29.436096   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:29.469086   66718 cri.go:89] found id: ""
	I0403 19:29:29.469114   66718 logs.go:282] 0 containers: []
	W0403 19:29:29.469121   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:29.469128   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:29.469192   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:29.502672   66718 cri.go:89] found id: ""
	I0403 19:29:29.502693   66718 logs.go:282] 0 containers: []
	W0403 19:29:29.502702   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:29.502708   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:29.502762   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:29.533405   66718 cri.go:89] found id: ""
	I0403 19:29:29.533431   66718 logs.go:282] 0 containers: []
	W0403 19:29:29.533441   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:29.533449   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:29.533501   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:29.566805   66718 cri.go:89] found id: ""
	I0403 19:29:29.566843   66718 logs.go:282] 0 containers: []
	W0403 19:29:29.566861   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:29.566872   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:29.566888   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:29.634467   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:29.634484   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:29.634495   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:29.720009   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:29.720041   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:29.759897   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:29.759930   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:29.811843   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:29.811870   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:32.324624   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:32.340763   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:32.340826   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:32.373608   66718 cri.go:89] found id: ""
	I0403 19:29:32.373640   66718 logs.go:282] 0 containers: []
	W0403 19:29:32.373650   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:32.373660   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:32.373717   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:32.403441   66718 cri.go:89] found id: ""
	I0403 19:29:32.403468   66718 logs.go:282] 0 containers: []
	W0403 19:29:32.403475   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:32.403481   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:32.403548   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:32.436755   66718 cri.go:89] found id: ""
	I0403 19:29:32.436783   66718 logs.go:282] 0 containers: []
	W0403 19:29:32.436790   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:32.436795   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:32.436842   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:32.468473   66718 cri.go:89] found id: ""
	I0403 19:29:32.468499   66718 logs.go:282] 0 containers: []
	W0403 19:29:32.468508   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:32.468515   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:32.468566   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:32.499641   66718 cri.go:89] found id: ""
	I0403 19:29:32.499667   66718 logs.go:282] 0 containers: []
	W0403 19:29:32.499677   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:32.499684   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:32.499744   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:32.530478   66718 cri.go:89] found id: ""
	I0403 19:29:32.530501   66718 logs.go:282] 0 containers: []
	W0403 19:29:32.530509   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:32.530516   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:32.530580   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:32.562151   66718 cri.go:89] found id: ""
	I0403 19:29:32.562177   66718 logs.go:282] 0 containers: []
	W0403 19:29:32.562186   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:32.562194   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:32.562252   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:32.591682   66718 cri.go:89] found id: ""
	I0403 19:29:32.591711   66718 logs.go:282] 0 containers: []
	W0403 19:29:32.591723   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:32.591734   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:32.591746   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:32.673081   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:32.673139   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:32.711152   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:32.711183   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:32.761207   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:32.761237   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:32.773647   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:32.773668   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:32.837406   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:35.338534   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:35.355906   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:35.355957   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:35.388464   66718 cri.go:89] found id: ""
	I0403 19:29:35.388486   66718 logs.go:282] 0 containers: []
	W0403 19:29:35.388492   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:35.388498   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:35.388543   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:35.424814   66718 cri.go:89] found id: ""
	I0403 19:29:35.424840   66718 logs.go:282] 0 containers: []
	W0403 19:29:35.424850   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:35.424857   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:35.424914   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:35.465875   66718 cri.go:89] found id: ""
	I0403 19:29:35.465913   66718 logs.go:282] 0 containers: []
	W0403 19:29:35.465923   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:35.465938   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:35.466004   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:35.504298   66718 cri.go:89] found id: ""
	I0403 19:29:35.504324   66718 logs.go:282] 0 containers: []
	W0403 19:29:35.504335   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:35.504343   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:35.504392   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:35.539258   66718 cri.go:89] found id: ""
	I0403 19:29:35.539286   66718 logs.go:282] 0 containers: []
	W0403 19:29:35.539297   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:35.539304   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:35.539373   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:35.568849   66718 cri.go:89] found id: ""
	I0403 19:29:35.568870   66718 logs.go:282] 0 containers: []
	W0403 19:29:35.568877   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:35.568882   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:35.568925   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:35.599023   66718 cri.go:89] found id: ""
	I0403 19:29:35.599044   66718 logs.go:282] 0 containers: []
	W0403 19:29:35.599051   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:35.599060   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:35.599116   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:35.632159   66718 cri.go:89] found id: ""
	I0403 19:29:35.632178   66718 logs.go:282] 0 containers: []
	W0403 19:29:35.632185   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:35.632202   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:35.632226   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:35.682161   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:35.682189   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:35.695982   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:35.696009   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:35.761775   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:35.761802   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:35.761817   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:35.855200   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:35.855230   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:38.420730   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:38.438305   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:38.438374   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:38.483111   66718 cri.go:89] found id: ""
	I0403 19:29:38.483141   66718 logs.go:282] 0 containers: []
	W0403 19:29:38.483152   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:38.483159   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:38.483228   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:38.534931   66718 cri.go:89] found id: ""
	I0403 19:29:38.534960   66718 logs.go:282] 0 containers: []
	W0403 19:29:38.534978   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:38.534986   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:38.535052   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:38.579422   66718 cri.go:89] found id: ""
	I0403 19:29:38.579452   66718 logs.go:282] 0 containers: []
	W0403 19:29:38.579463   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:38.579471   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:38.579534   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:38.621489   66718 cri.go:89] found id: ""
	I0403 19:29:38.621521   66718 logs.go:282] 0 containers: []
	W0403 19:29:38.621532   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:38.621540   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:38.621602   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:38.666125   66718 cri.go:89] found id: ""
	I0403 19:29:38.666150   66718 logs.go:282] 0 containers: []
	W0403 19:29:38.666161   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:38.666168   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:38.666227   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:38.713728   66718 cri.go:89] found id: ""
	I0403 19:29:38.713755   66718 logs.go:282] 0 containers: []
	W0403 19:29:38.713765   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:38.713773   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:38.713830   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:38.753338   66718 cri.go:89] found id: ""
	I0403 19:29:38.753366   66718 logs.go:282] 0 containers: []
	W0403 19:29:38.753375   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:38.753381   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:38.753432   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:38.789075   66718 cri.go:89] found id: ""
	I0403 19:29:38.789104   66718 logs.go:282] 0 containers: []
	W0403 19:29:38.789114   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:38.789124   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:38.789139   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:38.850067   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:38.850101   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:38.864283   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:38.864305   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:38.938634   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:38.938650   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:38.938661   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:39.032837   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:39.032869   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:41.582146   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:41.595844   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:41.595901   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:41.646092   66718 cri.go:89] found id: ""
	I0403 19:29:41.646124   66718 logs.go:282] 0 containers: []
	W0403 19:29:41.646136   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:41.646143   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:41.646202   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:41.691947   66718 cri.go:89] found id: ""
	I0403 19:29:41.691977   66718 logs.go:282] 0 containers: []
	W0403 19:29:41.691987   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:41.692005   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:41.692053   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:41.743004   66718 cri.go:89] found id: ""
	I0403 19:29:41.743038   66718 logs.go:282] 0 containers: []
	W0403 19:29:41.743049   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:41.743057   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:41.743113   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:41.785914   66718 cri.go:89] found id: ""
	I0403 19:29:41.785939   66718 logs.go:282] 0 containers: []
	W0403 19:29:41.785949   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:41.785956   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:41.786007   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:41.826083   66718 cri.go:89] found id: ""
	I0403 19:29:41.826116   66718 logs.go:282] 0 containers: []
	W0403 19:29:41.826129   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:41.826137   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:41.826190   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:41.863886   66718 cri.go:89] found id: ""
	I0403 19:29:41.863917   66718 logs.go:282] 0 containers: []
	W0403 19:29:41.863928   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:41.863936   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:41.863998   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:41.902642   66718 cri.go:89] found id: ""
	I0403 19:29:41.902665   66718 logs.go:282] 0 containers: []
	W0403 19:29:41.902675   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:41.902681   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:41.902737   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:41.936762   66718 cri.go:89] found id: ""
	I0403 19:29:41.936791   66718 logs.go:282] 0 containers: []
	W0403 19:29:41.936801   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:41.936812   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:41.936851   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:41.993242   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:41.993277   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:42.010174   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:42.010203   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:42.077315   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:42.077338   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:42.077354   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:42.164888   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:42.164921   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:44.708565   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:44.720876   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:44.720946   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:44.751576   66718 cri.go:89] found id: ""
	I0403 19:29:44.751604   66718 logs.go:282] 0 containers: []
	W0403 19:29:44.751620   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:44.751627   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:44.751672   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:44.783809   66718 cri.go:89] found id: ""
	I0403 19:29:44.783841   66718 logs.go:282] 0 containers: []
	W0403 19:29:44.783851   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:44.783858   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:44.783923   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:44.814607   66718 cri.go:89] found id: ""
	I0403 19:29:44.814633   66718 logs.go:282] 0 containers: []
	W0403 19:29:44.814643   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:44.814650   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:44.814702   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:44.847720   66718 cri.go:89] found id: ""
	I0403 19:29:44.847748   66718 logs.go:282] 0 containers: []
	W0403 19:29:44.847757   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:44.847764   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:44.847821   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:44.884221   66718 cri.go:89] found id: ""
	I0403 19:29:44.884248   66718 logs.go:282] 0 containers: []
	W0403 19:29:44.884255   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:44.884260   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:44.884327   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:44.919461   66718 cri.go:89] found id: ""
	I0403 19:29:44.919486   66718 logs.go:282] 0 containers: []
	W0403 19:29:44.919493   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:44.919500   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:44.919565   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:44.954889   66718 cri.go:89] found id: ""
	I0403 19:29:44.954916   66718 logs.go:282] 0 containers: []
	W0403 19:29:44.954937   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:44.954944   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:44.955008   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:44.990453   66718 cri.go:89] found id: ""
	I0403 19:29:44.990478   66718 logs.go:282] 0 containers: []
	W0403 19:29:44.990485   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:44.990494   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:44.990503   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:45.046515   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:45.046546   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:45.061329   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:45.061365   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:45.150196   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:45.150223   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:45.150236   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:45.235268   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:45.235297   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:47.782935   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:47.800676   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:47.800748   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:47.854384   66718 cri.go:89] found id: ""
	I0403 19:29:47.854412   66718 logs.go:282] 0 containers: []
	W0403 19:29:47.854422   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:47.854429   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:47.854492   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:47.898583   66718 cri.go:89] found id: ""
	I0403 19:29:47.898615   66718 logs.go:282] 0 containers: []
	W0403 19:29:47.898625   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:47.898632   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:47.898691   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:47.947167   66718 cri.go:89] found id: ""
	I0403 19:29:47.947202   66718 logs.go:282] 0 containers: []
	W0403 19:29:47.947213   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:47.947220   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:47.947273   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:47.988698   66718 cri.go:89] found id: ""
	I0403 19:29:47.988722   66718 logs.go:282] 0 containers: []
	W0403 19:29:47.988732   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:47.988739   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:47.988792   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:48.052453   66718 cri.go:89] found id: ""
	I0403 19:29:48.052477   66718 logs.go:282] 0 containers: []
	W0403 19:29:48.052487   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:48.052494   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:48.052541   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:48.102595   66718 cri.go:89] found id: ""
	I0403 19:29:48.102613   66718 logs.go:282] 0 containers: []
	W0403 19:29:48.102620   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:48.102626   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:48.102674   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:48.148900   66718 cri.go:89] found id: ""
	I0403 19:29:48.148929   66718 logs.go:282] 0 containers: []
	W0403 19:29:48.148939   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:48.148949   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:48.149006   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:48.191646   66718 cri.go:89] found id: ""
	I0403 19:29:48.191672   66718 logs.go:282] 0 containers: []
	W0403 19:29:48.191683   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:48.191694   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:48.191707   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:48.213571   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:48.213615   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:48.320386   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:48.320413   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:48.320429   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:48.431765   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:48.431806   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:48.474157   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:48.474194   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:51.039162   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:51.051969   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:51.052050   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:51.086710   66718 cri.go:89] found id: ""
	I0403 19:29:51.086736   66718 logs.go:282] 0 containers: []
	W0403 19:29:51.086745   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:51.086753   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:51.086810   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:51.119730   66718 cri.go:89] found id: ""
	I0403 19:29:51.119758   66718 logs.go:282] 0 containers: []
	W0403 19:29:51.119768   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:51.119775   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:51.119830   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:51.154496   66718 cri.go:89] found id: ""
	I0403 19:29:51.154524   66718 logs.go:282] 0 containers: []
	W0403 19:29:51.154534   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:51.154541   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:51.154596   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:51.192742   66718 cri.go:89] found id: ""
	I0403 19:29:51.192770   66718 logs.go:282] 0 containers: []
	W0403 19:29:51.192781   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:51.192793   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:51.192854   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:51.227398   66718 cri.go:89] found id: ""
	I0403 19:29:51.227431   66718 logs.go:282] 0 containers: []
	W0403 19:29:51.227442   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:51.227449   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:51.227508   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:51.260616   66718 cri.go:89] found id: ""
	I0403 19:29:51.260637   66718 logs.go:282] 0 containers: []
	W0403 19:29:51.260644   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:51.260650   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:51.260709   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:51.294523   66718 cri.go:89] found id: ""
	I0403 19:29:51.294550   66718 logs.go:282] 0 containers: []
	W0403 19:29:51.294559   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:51.294566   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:51.294618   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:51.330761   66718 cri.go:89] found id: ""
	I0403 19:29:51.330787   66718 logs.go:282] 0 containers: []
	W0403 19:29:51.330796   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:51.330804   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:51.330813   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:51.393109   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:51.393129   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:51.393140   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:51.483806   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:51.483836   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:51.526765   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:51.526789   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:51.575933   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:51.575965   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:54.090756   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:54.102918   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:29:54.102978   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:29:54.139409   66718 cri.go:89] found id: ""
	I0403 19:29:54.139436   66718 logs.go:282] 0 containers: []
	W0403 19:29:54.139445   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:29:54.139450   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:29:54.139494   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:29:54.170865   66718 cri.go:89] found id: ""
	I0403 19:29:54.170892   66718 logs.go:282] 0 containers: []
	W0403 19:29:54.170902   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:29:54.170910   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:29:54.170959   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:29:54.208374   66718 cri.go:89] found id: ""
	I0403 19:29:54.208393   66718 logs.go:282] 0 containers: []
	W0403 19:29:54.208399   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:29:54.208404   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:29:54.208441   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:29:54.246019   66718 cri.go:89] found id: ""
	I0403 19:29:54.246037   66718 logs.go:282] 0 containers: []
	W0403 19:29:54.246043   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:29:54.246049   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:29:54.246092   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:29:54.284514   66718 cri.go:89] found id: ""
	I0403 19:29:54.284533   66718 logs.go:282] 0 containers: []
	W0403 19:29:54.284539   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:29:54.284545   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:29:54.284595   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:29:54.317351   66718 cri.go:89] found id: ""
	I0403 19:29:54.317373   66718 logs.go:282] 0 containers: []
	W0403 19:29:54.317380   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:29:54.317387   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:29:54.317434   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:29:54.350232   66718 cri.go:89] found id: ""
	I0403 19:29:54.350253   66718 logs.go:282] 0 containers: []
	W0403 19:29:54.350261   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:29:54.350274   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:29:54.350330   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:29:54.383555   66718 cri.go:89] found id: ""
	I0403 19:29:54.383582   66718 logs.go:282] 0 containers: []
	W0403 19:29:54.383591   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:29:54.383602   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:29:54.383616   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:29:54.437676   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:29:54.437706   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:29:54.450749   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:29:54.450779   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:29:54.515087   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:29:54.515105   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:29:54.515119   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:29:54.604529   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:29:54.604562   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0403 19:29:57.142961   66718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:29:57.155001   66718 kubeadm.go:597] duration metric: took 4m3.082485526s to restartPrimaryControlPlane
	W0403 19:29:57.155056   66718 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0403 19:29:57.155080   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0403 19:29:58.762362   66718 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.607261785s)
	I0403 19:29:58.762442   66718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:29:58.780716   66718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0403 19:29:58.793012   66718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:29:58.805188   66718 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:29:58.805204   66718 kubeadm.go:157] found existing configuration files:
	
	I0403 19:29:58.805245   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:29:58.815201   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:29:58.815250   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:29:58.825343   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:29:58.833880   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:29:58.833945   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:29:58.842950   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:29:58.851405   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:29:58.851453   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:29:58.861548   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:29:58.874390   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:29:58.874437   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:29:58.886707   66718 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0403 19:29:58.959639   66718 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0403 19:29:58.959776   66718 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 19:29:59.104564   66718 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 19:29:59.104700   66718 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 19:29:59.104854   66718 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0403 19:29:59.298279   66718 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 19:29:59.300333   66718 out.go:235]   - Generating certificates and keys ...
	I0403 19:29:59.300430   66718 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 19:29:59.300517   66718 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 19:29:59.300656   66718 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0403 19:29:59.300758   66718 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0403 19:29:59.300893   66718 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0403 19:29:59.300988   66718 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0403 19:29:59.301089   66718 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0403 19:29:59.301186   66718 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0403 19:29:59.301295   66718 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0403 19:29:59.301403   66718 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0403 19:29:59.301466   66718 kubeadm.go:310] [certs] Using the existing "sa" key
	I0403 19:29:59.301558   66718 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 19:29:59.370754   66718 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 19:29:59.595484   66718 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 19:29:59.992459   66718 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 19:30:00.213708   66718 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 19:30:00.236697   66718 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 19:30:00.239313   66718 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 19:30:00.239480   66718 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 19:30:00.434903   66718 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 19:30:00.436791   66718 out.go:235]   - Booting up control plane ...
	I0403 19:30:00.436916   66718 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 19:30:00.456697   66718 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 19:30:00.458283   66718 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 19:30:00.459364   66718 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 19:30:00.463032   66718 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0403 19:30:40.464129   66718 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0403 19:30:40.464951   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:30:40.465127   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:30:45.465575   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:30:45.465818   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:30:55.466251   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:30:55.466519   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:31:15.467635   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:31:15.467882   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:31:55.470044   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:31:55.470565   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:31:55.470590   66718 kubeadm.go:310] 
	I0403 19:31:55.470704   66718 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0403 19:31:55.470806   66718 kubeadm.go:310] 		timed out waiting for the condition
	I0403 19:31:55.470816   66718 kubeadm.go:310] 
	I0403 19:31:55.470902   66718 kubeadm.go:310] 	This error is likely caused by:
	I0403 19:31:55.470978   66718 kubeadm.go:310] 		- The kubelet is not running
	I0403 19:31:55.471223   66718 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0403 19:31:55.471239   66718 kubeadm.go:310] 
	I0403 19:31:55.471463   66718 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0403 19:31:55.471539   66718 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0403 19:31:55.471609   66718 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0403 19:31:55.471620   66718 kubeadm.go:310] 
	I0403 19:31:55.471843   66718 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0403 19:31:55.472022   66718 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0403 19:31:55.472038   66718 kubeadm.go:310] 
	I0403 19:31:55.472279   66718 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0403 19:31:55.472523   66718 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0403 19:31:55.472711   66718 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0403 19:31:55.472877   66718 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0403 19:31:55.472911   66718 kubeadm.go:310] 
	I0403 19:31:55.473175   66718 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:31:55.473378   66718 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0403 19:31:55.473668   66718 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0403 19:31:55.473805   66718 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0403 19:31:55.473871   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0403 19:31:56.323173   66718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:31:56.339274   66718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:31:56.353111   66718 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:31:56.353136   66718 kubeadm.go:157] found existing configuration files:
	
	I0403 19:31:56.353190   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:31:56.366363   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:31:56.366420   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:31:56.378964   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:31:56.392212   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:31:56.392289   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:31:56.403379   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:31:56.412508   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:31:56.412576   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:31:56.426798   66718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:31:56.440319   66718 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:31:56.440393   66718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:31:56.454417   66718 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0403 19:31:56.557966   66718 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0403 19:31:56.558104   66718 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 19:31:56.764576   66718 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 19:31:56.764722   66718 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 19:31:56.764856   66718 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0403 19:31:56.972563   66718 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 19:31:56.975433   66718 out.go:235]   - Generating certificates and keys ...
	I0403 19:31:56.975535   66718 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 19:31:56.975622   66718 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 19:31:56.975715   66718 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0403 19:31:56.975771   66718 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0403 19:31:56.975827   66718 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0403 19:31:56.975868   66718 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0403 19:31:56.975929   66718 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0403 19:31:56.975998   66718 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0403 19:31:56.976088   66718 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0403 19:31:56.976252   66718 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0403 19:31:56.976299   66718 kubeadm.go:310] [certs] Using the existing "sa" key
	I0403 19:31:56.976370   66718 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 19:31:57.092088   66718 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 19:31:57.301801   66718 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 19:31:57.775331   66718 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 19:31:58.075848   66718 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 19:31:58.098354   66718 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 19:31:58.101022   66718 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 19:31:58.101143   66718 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 19:31:58.311606   66718 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 19:31:58.313742   66718 out.go:235]   - Booting up control plane ...
	I0403 19:31:58.313995   66718 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 19:31:58.320108   66718 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 19:31:58.321621   66718 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 19:31:58.322714   66718 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 19:31:58.326341   66718 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0403 19:32:38.329718   66718 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0403 19:32:38.329999   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:32:38.330263   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:32:43.331050   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:32:43.331314   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:32:53.332203   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:32:53.332480   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:33:13.333186   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:33:13.333452   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:33:53.332014   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:33:53.332308   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:33:53.332328   66718 kubeadm.go:310] 
	I0403 19:33:53.332364   66718 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0403 19:33:53.332399   66718 kubeadm.go:310] 		timed out waiting for the condition
	I0403 19:33:53.332406   66718 kubeadm.go:310] 
	I0403 19:33:53.332435   66718 kubeadm.go:310] 	This error is likely caused by:
	I0403 19:33:53.332465   66718 kubeadm.go:310] 		- The kubelet is not running
	I0403 19:33:53.332560   66718 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0403 19:33:53.332566   66718 kubeadm.go:310] 
	I0403 19:33:53.332655   66718 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0403 19:33:53.332718   66718 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0403 19:33:53.332781   66718 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0403 19:33:53.332790   66718 kubeadm.go:310] 
	I0403 19:33:53.332922   66718 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0403 19:33:53.333025   66718 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0403 19:33:53.333033   66718 kubeadm.go:310] 
	I0403 19:33:53.333168   66718 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0403 19:33:53.333296   66718 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0403 19:33:53.333410   66718 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0403 19:33:53.333518   66718 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0403 19:33:53.333528   66718 kubeadm.go:310] 
	I0403 19:33:53.334367   66718 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:33:53.334492   66718 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0403 19:33:53.334554   66718 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
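(Editor's note) The kubeadm advice above amounts to inspecting the kubelet service and the CRI-O containers on the node. A minimal sketch of that sequence, assuming shell access to the affected VM (for example via `minikube ssh -p old-k8s-version-471019`, the profile this run belongs to); these commands mirror the ones kubeadm prints and are not taken from the test harness itself:

	# Check whether the kubelet service is running and why it may have exited
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100

	# List all Kubernetes containers known to CRI-O (running or exited)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Inspect the logs of a failing container by its ID (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID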
	I0403 19:33:53.334604   66718 kubeadm.go:394] duration metric: took 7m59.310981648s to StartCluster
	I0403 19:33:53.334636   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:33:53.334685   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:33:53.373643   66718 cri.go:89] found id: ""
	I0403 19:33:53.373669   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.373682   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:33:53.373689   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:33:53.373736   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:33:53.403561   66718 cri.go:89] found id: ""
	I0403 19:33:53.403587   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.403595   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:33:53.403600   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:33:53.403655   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:33:53.433381   66718 cri.go:89] found id: ""
	I0403 19:33:53.433411   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.433420   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:33:53.433427   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:33:53.433480   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:33:53.464729   66718 cri.go:89] found id: ""
	I0403 19:33:53.464758   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.464769   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:33:53.464775   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:33:53.464843   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:33:53.495666   66718 cri.go:89] found id: ""
	I0403 19:33:53.495697   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.495708   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:33:53.495715   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:33:53.495782   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:33:53.527704   66718 cri.go:89] found id: ""
	I0403 19:33:53.527730   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.527739   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:33:53.527747   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:33:53.527804   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:33:53.567852   66718 cri.go:89] found id: ""
	I0403 19:33:53.567874   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.567881   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:33:53.567887   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:33:53.567943   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:33:53.597334   66718 cri.go:89] found id: ""
	I0403 19:33:53.597363   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.597374   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
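(Editor's note) The per-component probes above can be reproduced by hand with the same crictl invocation minikube logs; a small sketch, with the component list copied from the probes in this log (an empty result corresponds to the "No container was found" warnings):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching $name"
	  else
	    echo "$name: $ids"
	  fi
	done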
	I0403 19:33:53.597386   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:33:53.597399   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:33:53.653211   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:33:53.653246   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:33:53.666175   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:33:53.666201   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:33:53.736375   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:33:53.736397   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:33:53.736409   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:33:53.837412   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:33:53.837449   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
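(Editor's note) The log-gathering commands minikube runs above can also be executed directly on the node to capture the same diagnostics outside the test harness; a sketch assuming shell access to the VM:

	sudo journalctl -u kubelet -n 400                                        # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings/errors
	sudo journalctl -u crio -n 400                                           # CRI-O logs
	sudo crictl ps -a                                                        # container status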
	W0403 19:33:53.876433   66718 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0403 19:33:53.876481   66718 out.go:270] * 
	W0403 19:33:53.876533   66718 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:33:53.876547   66718 out.go:270] * 
	W0403 19:33:53.877616   66718 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0403 19:33:53.880186   66718 out.go:201] 
	W0403 19:33:53.881256   66718 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:33:53.881290   66718 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0403 19:33:53.881311   66718 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0403 19:33:53.882318   66718 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-471019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
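(Editor's note) Given the K8S_KUBELET_NOT_RUNNING exit and the suggestion logged above, one hedged next step is to retry the same start with the kubelet cgroup-driver override that minikube itself recommends. This sketch is assembled from the failing command line above plus that suggestion; it is not a command the test actually ran:

	out/minikube-linux-amd64 start -p old-k8s-version-471019 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd   # suggestion taken from the log above

	# If it still fails, capture kubelet logs from inside the VM:
	out/minikube-linux-amd64 -p old-k8s-version-471019 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"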
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 2 (225.874024ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-471019 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005 sudo cat                | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005 sudo cat                | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p flannel-999005 pgrep -a                           | flannel-999005            | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005 sudo cat                | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-999005                         | enable-default-cni-999005 | jenkins | v1.35.0 | 03 Apr 25 19:33 UTC | 03 Apr 25 19:33 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 19:33:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 19:33:02.376869   77599 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:33:02.377092   77599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:33:02.377103   77599 out.go:358] Setting ErrFile to fd 2...
	I0403 19:33:02.377107   77599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:33:02.377328   77599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:33:02.378024   77599 out.go:352] Setting JSON to false
	I0403 19:33:02.379161   77599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8127,"bootTime":1743700655,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:33:02.379239   77599 start.go:139] virtualization: kvm guest
	I0403 19:33:02.380689   77599 out.go:177] * [bridge-999005] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:33:02.382009   77599 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:33:02.382020   77599 notify.go:220] Checking for updates...
	I0403 19:33:02.384007   77599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:33:02.385169   77599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:33:02.386247   77599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:33:02.387253   77599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:33:02.388401   77599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:33:02.389846   77599 config.go:182] Loaded profile config "enable-default-cni-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:02.389947   77599 config.go:182] Loaded profile config "flannel-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:02.390028   77599 config.go:182] Loaded profile config "old-k8s-version-471019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:33:02.390112   77599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:33:02.427821   77599 out.go:177] * Using the kvm2 driver based on user configuration
	I0403 19:33:02.428964   77599 start.go:297] selected driver: kvm2
	I0403 19:33:02.428982   77599 start.go:901] validating driver "kvm2" against <nil>
	I0403 19:33:02.428993   77599 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:33:02.429643   77599 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:33:02.429716   77599 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 19:33:02.446281   77599 install.go:137] /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0403 19:33:02.446337   77599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 19:33:02.446713   77599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:02.446763   77599 cni.go:84] Creating CNI manager for "bridge"
	I0403 19:33:02.446772   77599 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 19:33:02.446854   77599 start.go:340] cluster config:
	{Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:33:02.446984   77599 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:33:02.448554   77599 out.go:177] * Starting "bridge-999005" primary control-plane node in "bridge-999005" cluster
	I0403 19:33:02.449622   77599 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 19:33:02.449667   77599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0403 19:33:02.449684   77599 cache.go:56] Caching tarball of preloaded images
	I0403 19:33:02.449752   77599 preload.go:172] Found /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0403 19:33:02.449762   77599 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0403 19:33:02.449877   77599 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/config.json ...
	I0403 19:33:02.449899   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/config.json: {Name:mk2379bf0104743094b5c7dde2a4c0ad0c4e9cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:02.450060   77599 start.go:360] acquireMachinesLock for bridge-999005: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 19:33:02.450095   77599 start.go:364] duration metric: took 19.647µs to acquireMachinesLock for "bridge-999005"
	I0403 19:33:02.450116   77599 start.go:93] Provisioning new machine with config: &{Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:33:02.450176   77599 start.go:125] createHost starting for "" (driver="kvm2")
	I0403 19:32:59.278761   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:01.780343   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:04.220006   75819 kubeadm.go:310] [api-check] The API server is healthy after 5.502485937s
	I0403 19:33:04.234838   75819 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0403 19:33:04.249952   75819 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0403 19:33:04.281515   75819 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0403 19:33:04.281698   75819 kubeadm.go:310] [mark-control-plane] Marking the node flannel-999005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0403 19:33:04.292849   75819 kubeadm.go:310] [bootstrap-token] Using token: i2opuv.2m47nf28qphn3gfh
	I0403 19:33:04.294088   75819 out.go:235]   - Configuring RBAC rules ...
	I0403 19:33:04.294250   75819 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0403 19:33:04.298299   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0403 19:33:04.306023   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0403 19:33:04.310195   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0403 19:33:04.316560   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0403 19:33:04.323542   75819 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0403 19:33:04.627539   75819 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0403 19:33:05.059216   75819 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0403 19:33:05.626881   75819 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0403 19:33:05.627833   75819 kubeadm.go:310] 
	I0403 19:33:05.627927   75819 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0403 19:33:05.627938   75819 kubeadm.go:310] 
	I0403 19:33:05.628077   75819 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0403 19:33:05.628099   75819 kubeadm.go:310] 
	I0403 19:33:05.628132   75819 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0403 19:33:05.628211   75819 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0403 19:33:05.628291   75819 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0403 19:33:05.628302   75819 kubeadm.go:310] 
	I0403 19:33:05.628386   75819 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0403 19:33:05.628396   75819 kubeadm.go:310] 
	I0403 19:33:05.628464   75819 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0403 19:33:05.628473   75819 kubeadm.go:310] 
	I0403 19:33:05.628539   75819 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0403 19:33:05.628647   75819 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0403 19:33:05.628774   75819 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0403 19:33:05.628800   75819 kubeadm.go:310] 
	I0403 19:33:05.628905   75819 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0403 19:33:05.629014   75819 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0403 19:33:05.629022   75819 kubeadm.go:310] 
	I0403 19:33:05.629117   75819 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i2opuv.2m47nf28qphn3gfh \
	I0403 19:33:05.629239   75819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 \
	I0403 19:33:05.629267   75819 kubeadm.go:310] 	--control-plane 
	I0403 19:33:05.629275   75819 kubeadm.go:310] 
	I0403 19:33:05.629382   75819 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0403 19:33:05.629391   75819 kubeadm.go:310] 
	I0403 19:33:05.629494   75819 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i2opuv.2m47nf28qphn3gfh \
	I0403 19:33:05.629630   75819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 
	I0403 19:33:05.630329   75819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:33:05.630359   75819 cni.go:84] Creating CNI manager for "flannel"
	I0403 19:33:05.631659   75819 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0403 19:33:02.451472   77599 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0403 19:33:02.451609   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:02.451664   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:02.466285   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0403 19:33:02.466761   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:02.467372   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:02.467391   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:02.467816   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:02.468014   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:02.468179   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:02.468339   77599 start.go:159] libmachine.API.Create for "bridge-999005" (driver="kvm2")
	I0403 19:33:02.468372   77599 client.go:168] LocalClient.Create starting
	I0403 19:33:02.468415   77599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem
	I0403 19:33:02.468455   77599 main.go:141] libmachine: Decoding PEM data...
	I0403 19:33:02.468481   77599 main.go:141] libmachine: Parsing certificate...
	I0403 19:33:02.468554   77599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem
	I0403 19:33:02.468582   77599 main.go:141] libmachine: Decoding PEM data...
	I0403 19:33:02.468601   77599 main.go:141] libmachine: Parsing certificate...
	I0403 19:33:02.468620   77599 main.go:141] libmachine: Running pre-create checks...
	I0403 19:33:02.468639   77599 main.go:141] libmachine: (bridge-999005) Calling .PreCreateCheck
	I0403 19:33:02.468953   77599 main.go:141] libmachine: (bridge-999005) Calling .GetConfigRaw
	I0403 19:33:02.469335   77599 main.go:141] libmachine: Creating machine...
	I0403 19:33:02.469347   77599 main.go:141] libmachine: (bridge-999005) Calling .Create
	I0403 19:33:02.469470   77599 main.go:141] libmachine: (bridge-999005) creating KVM machine...
	I0403 19:33:02.469485   77599 main.go:141] libmachine: (bridge-999005) creating network...
	I0403 19:33:02.470738   77599 main.go:141] libmachine: (bridge-999005) DBG | found existing default KVM network
	I0403 19:33:02.472415   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:02.472249   77621 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123820}
	I0403 19:33:02.472448   77599 main.go:141] libmachine: (bridge-999005) DBG | created network xml: 
	I0403 19:33:02.472470   77599 main.go:141] libmachine: (bridge-999005) DBG | <network>
	I0403 19:33:02.472483   77599 main.go:141] libmachine: (bridge-999005) DBG |   <name>mk-bridge-999005</name>
	I0403 19:33:02.472494   77599 main.go:141] libmachine: (bridge-999005) DBG |   <dns enable='no'/>
	I0403 19:33:02.472504   77599 main.go:141] libmachine: (bridge-999005) DBG |   
	I0403 19:33:02.472515   77599 main.go:141] libmachine: (bridge-999005) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0403 19:33:02.472526   77599 main.go:141] libmachine: (bridge-999005) DBG |     <dhcp>
	I0403 19:33:02.472534   77599 main.go:141] libmachine: (bridge-999005) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0403 19:33:02.472550   77599 main.go:141] libmachine: (bridge-999005) DBG |     </dhcp>
	I0403 19:33:02.472564   77599 main.go:141] libmachine: (bridge-999005) DBG |   </ip>
	I0403 19:33:02.472577   77599 main.go:141] libmachine: (bridge-999005) DBG |   
	I0403 19:33:02.472586   77599 main.go:141] libmachine: (bridge-999005) DBG | </network>
	I0403 19:33:02.472596   77599 main.go:141] libmachine: (bridge-999005) DBG | 
	I0403 19:33:02.477381   77599 main.go:141] libmachine: (bridge-999005) DBG | trying to create private KVM network mk-bridge-999005 192.168.39.0/24...
	I0403 19:33:02.549445   77599 main.go:141] libmachine: (bridge-999005) setting up store path in /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005 ...
	I0403 19:33:02.549483   77599 main.go:141] libmachine: (bridge-999005) DBG | private KVM network mk-bridge-999005 192.168.39.0/24 created
	I0403 19:33:02.549497   77599 main.go:141] libmachine: (bridge-999005) building disk image from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0403 19:33:02.549523   77599 main.go:141] libmachine: (bridge-999005) Downloading /home/jenkins/minikube-integration/20591-14371/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0403 19:33:02.549542   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:02.549359   77621 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:33:02.808436   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:02.808274   77621 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa...
	I0403 19:33:03.010631   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:03.010517   77621 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/bridge-999005.rawdisk...
	I0403 19:33:03.010661   77599 main.go:141] libmachine: (bridge-999005) DBG | Writing magic tar header
	I0403 19:33:03.010671   77599 main.go:141] libmachine: (bridge-999005) DBG | Writing SSH key tar header
	I0403 19:33:03.010768   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:03.010673   77621 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005 ...
	I0403 19:33:03.010855   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005
	I0403 19:33:03.010883   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005 (perms=drwx------)
	I0403 19:33:03.010899   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines (perms=drwxr-xr-x)
	I0403 19:33:03.010913   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines
	I0403 19:33:03.010948   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:33:03.010961   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371
	I0403 19:33:03.010974   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0403 19:33:03.010993   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube (perms=drwxr-xr-x)
	I0403 19:33:03.011002   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins
	I0403 19:33:03.011012   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home
	I0403 19:33:03.011022   77599 main.go:141] libmachine: (bridge-999005) DBG | skipping /home - not owner
	I0403 19:33:03.011036   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371 (perms=drwxrwxr-x)
	I0403 19:33:03.011047   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0403 19:33:03.011061   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0403 19:33:03.011071   77599 main.go:141] libmachine: (bridge-999005) creating domain...
	I0403 19:33:03.012376   77599 main.go:141] libmachine: (bridge-999005) define libvirt domain using xml: 
	I0403 19:33:03.012401   77599 main.go:141] libmachine: (bridge-999005) <domain type='kvm'>
	I0403 19:33:03.012412   77599 main.go:141] libmachine: (bridge-999005)   <name>bridge-999005</name>
	I0403 19:33:03.012421   77599 main.go:141] libmachine: (bridge-999005)   <memory unit='MiB'>3072</memory>
	I0403 19:33:03.012429   77599 main.go:141] libmachine: (bridge-999005)   <vcpu>2</vcpu>
	I0403 19:33:03.012436   77599 main.go:141] libmachine: (bridge-999005)   <features>
	I0403 19:33:03.012444   77599 main.go:141] libmachine: (bridge-999005)     <acpi/>
	I0403 19:33:03.012452   77599 main.go:141] libmachine: (bridge-999005)     <apic/>
	I0403 19:33:03.012461   77599 main.go:141] libmachine: (bridge-999005)     <pae/>
	I0403 19:33:03.012468   77599 main.go:141] libmachine: (bridge-999005)     
	I0403 19:33:03.012473   77599 main.go:141] libmachine: (bridge-999005)   </features>
	I0403 19:33:03.012482   77599 main.go:141] libmachine: (bridge-999005)   <cpu mode='host-passthrough'>
	I0403 19:33:03.012508   77599 main.go:141] libmachine: (bridge-999005)   
	I0403 19:33:03.012524   77599 main.go:141] libmachine: (bridge-999005)   </cpu>
	I0403 19:33:03.012549   77599 main.go:141] libmachine: (bridge-999005)   <os>
	I0403 19:33:03.012572   77599 main.go:141] libmachine: (bridge-999005)     <type>hvm</type>
	I0403 19:33:03.012588   77599 main.go:141] libmachine: (bridge-999005)     <boot dev='cdrom'/>
	I0403 19:33:03.012606   77599 main.go:141] libmachine: (bridge-999005)     <boot dev='hd'/>
	I0403 19:33:03.012615   77599 main.go:141] libmachine: (bridge-999005)     <bootmenu enable='no'/>
	I0403 19:33:03.012622   77599 main.go:141] libmachine: (bridge-999005)   </os>
	I0403 19:33:03.012630   77599 main.go:141] libmachine: (bridge-999005)   <devices>
	I0403 19:33:03.012641   77599 main.go:141] libmachine: (bridge-999005)     <disk type='file' device='cdrom'>
	I0403 19:33:03.012653   77599 main.go:141] libmachine: (bridge-999005)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/boot2docker.iso'/>
	I0403 19:33:03.012670   77599 main.go:141] libmachine: (bridge-999005)       <target dev='hdc' bus='scsi'/>
	I0403 19:33:03.012679   77599 main.go:141] libmachine: (bridge-999005)       <readonly/>
	I0403 19:33:03.012701   77599 main.go:141] libmachine: (bridge-999005)     </disk>
	I0403 19:33:03.012714   77599 main.go:141] libmachine: (bridge-999005)     <disk type='file' device='disk'>
	I0403 19:33:03.012725   77599 main.go:141] libmachine: (bridge-999005)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0403 19:33:03.012745   77599 main.go:141] libmachine: (bridge-999005)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/bridge-999005.rawdisk'/>
	I0403 19:33:03.012755   77599 main.go:141] libmachine: (bridge-999005)       <target dev='hda' bus='virtio'/>
	I0403 19:33:03.012769   77599 main.go:141] libmachine: (bridge-999005)     </disk>
	I0403 19:33:03.012801   77599 main.go:141] libmachine: (bridge-999005)     <interface type='network'>
	I0403 19:33:03.012814   77599 main.go:141] libmachine: (bridge-999005)       <source network='mk-bridge-999005'/>
	I0403 19:33:03.012822   77599 main.go:141] libmachine: (bridge-999005)       <model type='virtio'/>
	I0403 19:33:03.012827   77599 main.go:141] libmachine: (bridge-999005)     </interface>
	I0403 19:33:03.012834   77599 main.go:141] libmachine: (bridge-999005)     <interface type='network'>
	I0403 19:33:03.012839   77599 main.go:141] libmachine: (bridge-999005)       <source network='default'/>
	I0403 19:33:03.012846   77599 main.go:141] libmachine: (bridge-999005)       <model type='virtio'/>
	I0403 19:33:03.012851   77599 main.go:141] libmachine: (bridge-999005)     </interface>
	I0403 19:33:03.012856   77599 main.go:141] libmachine: (bridge-999005)     <serial type='pty'>
	I0403 19:33:03.012863   77599 main.go:141] libmachine: (bridge-999005)       <target port='0'/>
	I0403 19:33:03.012888   77599 main.go:141] libmachine: (bridge-999005)     </serial>
	I0403 19:33:03.012900   77599 main.go:141] libmachine: (bridge-999005)     <console type='pty'>
	I0403 19:33:03.012911   77599 main.go:141] libmachine: (bridge-999005)       <target type='serial' port='0'/>
	I0403 19:33:03.012924   77599 main.go:141] libmachine: (bridge-999005)     </console>
	I0403 19:33:03.012929   77599 main.go:141] libmachine: (bridge-999005)     <rng model='virtio'>
	I0403 19:33:03.012935   77599 main.go:141] libmachine: (bridge-999005)       <backend model='random'>/dev/random</backend>
	I0403 19:33:03.012939   77599 main.go:141] libmachine: (bridge-999005)     </rng>
	I0403 19:33:03.012943   77599 main.go:141] libmachine: (bridge-999005)     
	I0403 19:33:03.012956   77599 main.go:141] libmachine: (bridge-999005)     
	I0403 19:33:03.012985   77599 main.go:141] libmachine: (bridge-999005)   </devices>
	I0403 19:33:03.013008   77599 main.go:141] libmachine: (bridge-999005) </domain>
	I0403 19:33:03.013039   77599 main.go:141] libmachine: (bridge-999005) 
	I0403 19:33:03.017100   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:76:dd:cb in network default
	I0403 19:33:03.017850   77599 main.go:141] libmachine: (bridge-999005) starting domain...
	I0403 19:33:03.017875   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:03.017883   77599 main.go:141] libmachine: (bridge-999005) ensuring networks are active...
	I0403 19:33:03.018620   77599 main.go:141] libmachine: (bridge-999005) Ensuring network default is active
	I0403 19:33:03.018960   77599 main.go:141] libmachine: (bridge-999005) Ensuring network mk-bridge-999005 is active
	I0403 19:33:03.019610   77599 main.go:141] libmachine: (bridge-999005) getting domain XML...
	I0403 19:33:03.020474   77599 main.go:141] libmachine: (bridge-999005) creating domain...
	I0403 19:33:04.308608   77599 main.go:141] libmachine: (bridge-999005) waiting for IP...
	I0403 19:33:04.309508   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:04.310076   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:04.310237   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:04.310154   77621 retry.go:31] will retry after 304.11605ms: waiting for domain to come up
	I0403 19:33:04.615460   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:04.616072   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:04.616105   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:04.616027   77621 retry.go:31] will retry after 352.836416ms: waiting for domain to come up
	I0403 19:33:04.970906   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:04.971506   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:04.971580   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:04.971492   77621 retry.go:31] will retry after 384.292797ms: waiting for domain to come up
	I0403 19:33:05.357155   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:05.357783   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:05.357804   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:05.357746   77621 retry.go:31] will retry after 593.108014ms: waiting for domain to come up
	I0403 19:33:05.953253   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:05.953908   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:05.953955   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:05.953851   77621 retry.go:31] will retry after 715.405514ms: waiting for domain to come up
	I0403 19:33:06.671416   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:06.671869   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:06.671893   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:06.671849   77621 retry.go:31] will retry after 643.974958ms: waiting for domain to come up
	I0403 19:33:07.317681   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:07.318083   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:07.318111   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:07.318044   77621 retry.go:31] will retry after 830.836827ms: waiting for domain to come up
	I0403 19:33:04.278957   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:06.279442   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:05.632586   75819 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0403 19:33:05.638039   75819 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0403 19:33:05.638061   75819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0403 19:33:05.665102   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0403 19:33:06.148083   75819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0403 19:33:06.148182   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:06.148222   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-999005 minikube.k8s.io/updated_at=2025_04_03T19_33_06_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053 minikube.k8s.io/name=flannel-999005 minikube.k8s.io/primary=true
	I0403 19:33:06.328677   75819 ops.go:34] apiserver oom_adj: -16
	I0403 19:33:06.328804   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:06.829687   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:07.329161   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:07.829420   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:08.328906   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:08.828872   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:09.328884   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:09.829539   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:09.973416   75819 kubeadm.go:1113] duration metric: took 3.825298406s to wait for elevateKubeSystemPrivileges
	I0403 19:33:09.973463   75819 kubeadm.go:394] duration metric: took 14.815036163s to StartCluster
	I0403 19:33:09.973485   75819 settings.go:142] acquiring lock: {Name:mk92384ef10350a2ea8b1710e0b74c72a0214398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:09.973557   75819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:33:09.974857   75819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:09.975109   75819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0403 19:33:09.975113   75819 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:33:09.975194   75819 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0403 19:33:09.975291   75819 addons.go:69] Setting storage-provisioner=true in profile "flannel-999005"
	I0403 19:33:09.975313   75819 addons.go:238] Setting addon storage-provisioner=true in "flannel-999005"
	I0403 19:33:09.975344   75819 host.go:66] Checking if "flannel-999005" exists ...
	I0403 19:33:09.975339   75819 addons.go:69] Setting default-storageclass=true in profile "flannel-999005"
	I0403 19:33:09.975359   75819 config.go:182] Loaded profile config "flannel-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:09.975366   75819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-999005"
	I0403 19:33:09.975856   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:09.975875   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:09.975897   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:09.975907   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:09.976893   75819 out.go:177] * Verifying Kubernetes components...
	I0403 19:33:09.978410   75819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:09.995627   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44917
	I0403 19:33:09.995731   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46853
	I0403 19:33:09.996071   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:09.996180   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:09.996730   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:09.996748   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:09.996880   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:09.996905   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:09.997268   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:09.997310   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:09.997479   75819 main.go:141] libmachine: (flannel-999005) Calling .GetState
	I0403 19:33:09.997886   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:09.997934   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:10.001151   75819 addons.go:238] Setting addon default-storageclass=true in "flannel-999005"
	I0403 19:33:10.001199   75819 host.go:66] Checking if "flannel-999005" exists ...
	I0403 19:33:10.001557   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:10.001587   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:10.014104   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0403 19:33:10.014524   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:10.015098   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:10.015123   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:10.015470   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:10.015720   75819 main.go:141] libmachine: (flannel-999005) Calling .GetState
	I0403 19:33:10.017793   75819 main.go:141] libmachine: (flannel-999005) Calling .DriverName
	I0403 19:33:10.019942   75819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:33:10.021057   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0403 19:33:10.021158   75819 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:10.021177   75819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0403 19:33:10.021200   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHHostname
	I0403 19:33:10.021505   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:10.021986   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:10.022001   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:10.022291   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:10.022934   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:10.022978   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:10.024920   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.025474   75819 main.go:141] libmachine: (flannel-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:2c", ip: ""} in network mk-flannel-999005: {Iface:virbr4 ExpiryTime:2025-04-03 20:32:40 +0000 UTC Type:0 Mac:52:54:00:f9:eb:2c Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:flannel-999005 Clientid:01:52:54:00:f9:eb:2c}
	I0403 19:33:10.025494   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined IP address 192.168.72.34 and MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.025764   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHPort
	I0403 19:33:10.025935   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHKeyPath
	I0403 19:33:10.026060   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHUsername
	I0403 19:33:10.026152   75819 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/flannel-999005/id_rsa Username:docker}
	I0403 19:33:10.038389   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0403 19:33:10.038851   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:10.039336   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:10.039352   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:10.039758   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:10.039925   75819 main.go:141] libmachine: (flannel-999005) Calling .GetState
	I0403 19:33:10.041799   75819 main.go:141] libmachine: (flannel-999005) Calling .DriverName
	I0403 19:33:10.041979   75819 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:10.041991   75819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0403 19:33:10.042006   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHHostname
	I0403 19:33:10.045247   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.045722   75819 main.go:141] libmachine: (flannel-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:2c", ip: ""} in network mk-flannel-999005: {Iface:virbr4 ExpiryTime:2025-04-03 20:32:40 +0000 UTC Type:0 Mac:52:54:00:f9:eb:2c Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:flannel-999005 Clientid:01:52:54:00:f9:eb:2c}
	I0403 19:33:10.045806   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined IP address 192.168.72.34 and MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.046067   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHPort
	I0403 19:33:10.046226   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHKeyPath
	I0403 19:33:10.046320   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHUsername
	I0403 19:33:10.046486   75819 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/flannel-999005/id_rsa Username:docker}
	I0403 19:33:10.299013   75819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:10.317655   75819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:10.338018   75819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:33:10.338068   75819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0403 19:33:10.862171   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862198   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.862252   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862291   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.862394   75819 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0403 19:33:10.862529   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.862607   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.862646   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.862682   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.862685   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.862709   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862717   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.862733   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.862743   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862756   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.863008   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.863020   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.863023   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.863228   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.863249   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.863680   75819 node_ready.go:35] waiting up to 15m0s for node "flannel-999005" to be "Ready" ...
	I0403 19:33:10.884450   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.884469   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.884725   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.884743   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.884768   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.886301   75819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0403 19:33:08.779259   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:10.778746   73990 pod_ready.go:93] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.778767   73990 pod_ready.go:82] duration metric: took 38.005486758s for pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.778775   73990 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.781400   73990 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-nthv6" not found
	I0403 19:33:10.781418   73990 pod_ready.go:82] duration metric: took 2.637243ms for pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace to be "Ready" ...
	E0403 19:33:10.781427   73990 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-nthv6" not found
	I0403 19:33:10.781433   73990 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.785172   73990 pod_ready.go:93] pod "etcd-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.785192   73990 pod_ready.go:82] duration metric: took 3.752808ms for pod "etcd-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.785207   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.788834   73990 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.788850   73990 pod_ready.go:82] duration metric: took 3.634986ms for pod "kube-apiserver-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.788861   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.793809   73990 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.793831   73990 pod_ready.go:82] duration metric: took 4.96233ms for pod "kube-controller-manager-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.793843   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-mzxck" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.977165   73990 pod_ready.go:93] pod "kube-proxy-mzxck" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.977191   73990 pod_ready.go:82] duration metric: took 183.339442ms for pod "kube-proxy-mzxck" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.977209   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:11.377090   73990 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:11.377122   73990 pod_ready.go:82] duration metric: took 399.903527ms for pod "kube-scheduler-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:11.377135   73990 pod_ready.go:39] duration metric: took 38.606454546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:11.377156   73990 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:33:11.377225   73990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:33:11.399542   73990 api_server.go:72] duration metric: took 38.946574315s to wait for apiserver process to appear ...
	I0403 19:33:11.399566   73990 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:33:11.399582   73990 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I0403 19:33:11.405734   73990 api_server.go:279] https://192.168.50.55:8443/healthz returned 200:
	ok
	I0403 19:33:11.406888   73990 api_server.go:141] control plane version: v1.32.2
	I0403 19:33:11.406910   73990 api_server.go:131] duration metric: took 7.338515ms to wait for apiserver health ...
	I0403 19:33:11.406918   73990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:33:11.582871   73990 system_pods.go:59] 7 kube-system pods found
	I0403 19:33:11.582912   73990 system_pods.go:61] "coredns-668d6bf9bc-2vwz9" [e83c5e99-c2f0-4228-bc84-d048bd7dba97] Running
	I0403 19:33:11.582920   73990 system_pods.go:61] "etcd-enable-default-cni-999005" [201225ab-9372-41eb-9c78-a52f125b0435] Running
	I0403 19:33:11.582927   73990 system_pods.go:61] "kube-apiserver-enable-default-cni-999005" [f3e9e4a1-810a-423a-8e08-35d311067324] Running
	I0403 19:33:11.582933   73990 system_pods.go:61] "kube-controller-manager-enable-default-cni-999005" [0b827b54-1569-4c8e-a582-ec0fd8e97cbc] Running
	I0403 19:33:11.582938   73990 system_pods.go:61] "kube-proxy-mzxck" [6c2874ed-9e8f-4222-87c3-fe23d207134c] Running
	I0403 19:33:11.582943   73990 system_pods.go:61] "kube-scheduler-enable-default-cni-999005" [e5d0c29c-06fc-4614-a107-51917236c60c] Running
	I0403 19:33:11.582949   73990 system_pods.go:61] "storage-provisioner" [6fab90c6-1563-4504-83d8-443f80cfb99c] Running
	I0403 19:33:11.582957   73990 system_pods.go:74] duration metric: took 176.033201ms to wait for pod list to return data ...
	I0403 19:33:11.582971   73990 default_sa.go:34] waiting for default service account to be created ...
	I0403 19:33:11.777789   73990 default_sa.go:45] found service account: "default"
	I0403 19:33:11.777811   73990 default_sa.go:55] duration metric: took 194.83101ms for default service account to be created ...
	I0403 19:33:11.777819   73990 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 19:33:11.977547   73990 system_pods.go:86] 7 kube-system pods found
	I0403 19:33:11.977583   73990 system_pods.go:89] "coredns-668d6bf9bc-2vwz9" [e83c5e99-c2f0-4228-bc84-d048bd7dba97] Running
	I0403 19:33:11.977592   73990 system_pods.go:89] "etcd-enable-default-cni-999005" [201225ab-9372-41eb-9c78-a52f125b0435] Running
	I0403 19:33:11.977599   73990 system_pods.go:89] "kube-apiserver-enable-default-cni-999005" [f3e9e4a1-810a-423a-8e08-35d311067324] Running
	I0403 19:33:11.977605   73990 system_pods.go:89] "kube-controller-manager-enable-default-cni-999005" [0b827b54-1569-4c8e-a582-ec0fd8e97cbc] Running
	I0403 19:33:11.977609   73990 system_pods.go:89] "kube-proxy-mzxck" [6c2874ed-9e8f-4222-87c3-fe23d207134c] Running
	I0403 19:33:11.977615   73990 system_pods.go:89] "kube-scheduler-enable-default-cni-999005" [e5d0c29c-06fc-4614-a107-51917236c60c] Running
	I0403 19:33:11.977620   73990 system_pods.go:89] "storage-provisioner" [6fab90c6-1563-4504-83d8-443f80cfb99c] Running
	I0403 19:33:11.977629   73990 system_pods.go:126] duration metric: took 199.803644ms to wait for k8s-apps to be running ...
	I0403 19:33:11.977643   73990 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 19:33:11.977695   73990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:33:11.993125   73990 system_svc.go:56] duration metric: took 15.471997ms WaitForService to wait for kubelet
	I0403 19:33:11.993158   73990 kubeadm.go:582] duration metric: took 39.540195871s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:11.993188   73990 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:33:12.176775   73990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:33:12.176803   73990 node_conditions.go:123] node cpu capacity is 2
	I0403 19:33:12.176814   73990 node_conditions.go:105] duration metric: took 183.620688ms to run NodePressure ...
	I0403 19:33:12.176824   73990 start.go:241] waiting for startup goroutines ...
	I0403 19:33:12.176832   73990 start.go:246] waiting for cluster config update ...
	I0403 19:33:12.176840   73990 start.go:255] writing updated cluster config ...
	I0403 19:33:12.177113   73990 ssh_runner.go:195] Run: rm -f paused
	I0403 19:33:12.225807   73990 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 19:33:12.228521   73990 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-999005" cluster and "default" namespace by default
	I0403 19:33:08.150408   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:08.151003   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:08.151075   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:08.150981   77621 retry.go:31] will retry after 1.152427701s: waiting for domain to come up
	I0403 19:33:09.305349   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:09.305908   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:09.305936   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:09.305883   77621 retry.go:31] will retry after 1.688969841s: waiting for domain to come up
	I0403 19:33:10.996123   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:10.996600   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:10.996677   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:10.996605   77621 retry.go:31] will retry after 1.643659414s: waiting for domain to come up
	I0403 19:33:10.887137   75819 addons.go:514] duration metric: took 911.958897ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0403 19:33:11.366941   75819 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-999005" context rescaled to 1 replicas
	I0403 19:33:12.867785   75819 node_ready.go:53] node "flannel-999005" has status "Ready":"False"
	I0403 19:33:13.333186   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:33:13.333452   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:33:12.642410   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:12.642945   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:12.642979   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:12.642914   77621 retry.go:31] will retry after 2.077428265s: waiting for domain to come up
	I0403 19:33:14.722084   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:14.722568   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:14.722595   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:14.722556   77621 retry.go:31] will retry after 2.731919508s: waiting for domain to come up
	I0403 19:33:15.367030   75819 node_ready.go:53] node "flannel-999005" has status "Ready":"False"
	I0403 19:33:15.866309   75819 node_ready.go:49] node "flannel-999005" has status "Ready":"True"
	I0403 19:33:15.866339   75819 node_ready.go:38] duration metric: took 5.002629932s for node "flannel-999005" to be "Ready" ...
	I0403 19:33:15.866351   75819 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:15.878526   75819 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:17.884431   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:17.457578   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:17.458158   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:17.458186   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:17.458134   77621 retry.go:31] will retry after 2.937911428s: waiting for domain to come up
	I0403 19:33:20.397025   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:20.397485   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:20.397542   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:20.397476   77621 retry.go:31] will retry after 4.371309871s: waiting for domain to come up
	I0403 19:33:20.384008   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:22.384126   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:24.384580   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:24.771404   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.771836   77599 main.go:141] libmachine: (bridge-999005) found domain IP: 192.168.39.185
	I0403 19:33:24.771856   77599 main.go:141] libmachine: (bridge-999005) reserving static IP address...
	I0403 19:33:24.771868   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has current primary IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.772259   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find host DHCP lease matching {name: "bridge-999005", mac: "52:54:00:7a:d8:f7", ip: "192.168.39.185"} in network mk-bridge-999005
	I0403 19:33:24.855210   77599 main.go:141] libmachine: (bridge-999005) reserved static IP address 192.168.39.185 for domain bridge-999005
	I0403 19:33:24.855240   77599 main.go:141] libmachine: (bridge-999005) waiting for SSH...
	I0403 19:33:24.855250   77599 main.go:141] libmachine: (bridge-999005) DBG | Getting to WaitForSSH function...
	I0403 19:33:24.858175   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.858563   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:24.858592   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.858757   77599 main.go:141] libmachine: (bridge-999005) DBG | Using SSH client type: external
	I0403 19:33:24.858784   77599 main.go:141] libmachine: (bridge-999005) DBG | Using SSH private key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa (-rw-------)
	I0403 19:33:24.858847   77599 main.go:141] libmachine: (bridge-999005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0403 19:33:24.858868   77599 main.go:141] libmachine: (bridge-999005) DBG | About to run SSH command:
	I0403 19:33:24.858885   77599 main.go:141] libmachine: (bridge-999005) DBG | exit 0
	I0403 19:33:24.991462   77599 main.go:141] libmachine: (bridge-999005) DBG | SSH cmd err, output: <nil>: 
	I0403 19:33:24.991735   77599 main.go:141] libmachine: (bridge-999005) KVM machine creation complete
	I0403 19:33:24.992066   77599 main.go:141] libmachine: (bridge-999005) Calling .GetConfigRaw
	I0403 19:33:24.992629   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:24.992815   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:24.992938   77599 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0403 19:33:24.992952   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:24.994308   77599 main.go:141] libmachine: Detecting operating system of created instance...
	I0403 19:33:24.994326   77599 main.go:141] libmachine: Waiting for SSH to be available...
	I0403 19:33:24.994333   77599 main.go:141] libmachine: Getting to WaitForSSH function...
	I0403 19:33:24.994341   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:24.996876   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.997275   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:24.997304   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.997503   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:24.997680   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:24.997873   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:24.998025   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:24.998208   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:24.998408   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:24.998420   77599 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0403 19:33:25.106052   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:33:25.106078   77599 main.go:141] libmachine: Detecting the provisioner...
	I0403 19:33:25.106088   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.109437   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.109896   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.109925   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.110110   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.110294   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.110467   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.110624   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.110813   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.111134   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.111153   77599 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0403 19:33:25.216086   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0403 19:33:25.216142   77599 main.go:141] libmachine: found compatible host: buildroot
	I0403 19:33:25.216151   77599 main.go:141] libmachine: Provisioning with buildroot...
	I0403 19:33:25.216159   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:25.216374   77599 buildroot.go:166] provisioning hostname "bridge-999005"
	I0403 19:33:25.216401   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:25.216572   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.219422   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.219818   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.219856   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.219955   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.220119   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.220285   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.220404   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.220574   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.220845   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.220870   77599 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-999005 && echo "bridge-999005" | sudo tee /etc/hostname
	I0403 19:33:25.342189   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-999005
	
	I0403 19:33:25.342213   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.344813   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.345183   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.345211   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.345371   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.345582   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.345760   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.345918   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.346073   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.346281   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.346303   77599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-999005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-999005/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-999005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0403 19:33:25.458885   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:33:25.458914   77599 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20591-14371/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-14371/.minikube}
	I0403 19:33:25.458936   77599 buildroot.go:174] setting up certificates
	I0403 19:33:25.458946   77599 provision.go:84] configureAuth start
	I0403 19:33:25.458954   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:25.459254   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:25.461901   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.462300   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.462326   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.462424   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.464888   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.465249   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.465284   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.465492   77599 provision.go:143] copyHostCerts
	I0403 19:33:25.465551   77599 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem, removing ...
	I0403 19:33:25.465580   77599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem
	I0403 19:33:25.465662   77599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem (1082 bytes)
	I0403 19:33:25.465795   77599 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem, removing ...
	I0403 19:33:25.465805   77599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem
	I0403 19:33:25.465835   77599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem (1123 bytes)
	I0403 19:33:25.465951   77599 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem, removing ...
	I0403 19:33:25.465960   77599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem
	I0403 19:33:25.465984   77599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem (1675 bytes)
	I0403 19:33:25.466044   77599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem org=jenkins.bridge-999005 san=[127.0.0.1 192.168.39.185 bridge-999005 localhost minikube]
	I0403 19:33:25.774649   77599 provision.go:177] copyRemoteCerts
	I0403 19:33:25.774710   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0403 19:33:25.774731   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.777197   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.777576   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.777599   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.777795   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.777962   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.778108   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.778212   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:25.860653   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0403 19:33:25.882849   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0403 19:33:25.904559   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0403 19:33:25.926431   77599 provision.go:87] duration metric: took 467.475481ms to configureAuth
	I0403 19:33:25.926455   77599 buildroot.go:189] setting minikube options for container-runtime
	I0403 19:33:25.926650   77599 config.go:182] Loaded profile config "bridge-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:25.926725   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.929371   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.929809   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.929838   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.930028   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.930213   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.930335   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.930463   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.930620   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.930837   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.930859   77599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0403 19:33:26.149645   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0403 19:33:26.149674   77599 main.go:141] libmachine: Checking connection to Docker...
	I0403 19:33:26.149683   77599 main.go:141] libmachine: (bridge-999005) Calling .GetURL
	I0403 19:33:26.151048   77599 main.go:141] libmachine: (bridge-999005) DBG | using libvirt version 6000000
	I0403 19:33:26.153703   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.154090   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.154119   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.154326   77599 main.go:141] libmachine: Docker is up and running!
	I0403 19:33:26.154341   77599 main.go:141] libmachine: Reticulating splines...
	I0403 19:33:26.154349   77599 client.go:171] duration metric: took 23.685966388s to LocalClient.Create
	I0403 19:33:26.154377   77599 start.go:167] duration metric: took 23.686038349s to libmachine.API.Create "bridge-999005"
	I0403 19:33:26.154389   77599 start.go:293] postStartSetup for "bridge-999005" (driver="kvm2")
	I0403 19:33:26.154402   77599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0403 19:33:26.154427   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.154672   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0403 19:33:26.154704   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.156992   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.157408   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.157429   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.157561   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.157730   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.157866   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.157997   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:26.241074   77599 ssh_runner.go:195] Run: cat /etc/os-release
	I0403 19:33:26.245234   77599 info.go:137] Remote host: Buildroot 2023.02.9
	I0403 19:33:26.245256   77599 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/addons for local assets ...
	I0403 19:33:26.245308   77599 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/files for local assets ...
	I0403 19:33:26.245384   77599 filesync.go:149] local asset: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem -> 215522.pem in /etc/ssl/certs
	I0403 19:33:26.245467   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0403 19:33:26.255926   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:33:26.280402   77599 start.go:296] duration metric: took 125.998084ms for postStartSetup
	I0403 19:33:26.280453   77599 main.go:141] libmachine: (bridge-999005) Calling .GetConfigRaw
	I0403 19:33:26.281006   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:26.283814   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.284161   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.284198   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.284452   77599 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/config.json ...
	I0403 19:33:26.284648   77599 start.go:128] duration metric: took 23.834461991s to createHost
	I0403 19:33:26.284669   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.286766   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.287110   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.287143   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.287319   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.287485   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.287642   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.287742   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.287917   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:26.288126   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:26.288141   77599 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0403 19:33:26.391168   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743708806.364931884
	
	I0403 19:33:26.391188   77599 fix.go:216] guest clock: 1743708806.364931884
	I0403 19:33:26.391194   77599 fix.go:229] Guest: 2025-04-03 19:33:26.364931884 +0000 UTC Remote: 2025-04-03 19:33:26.284659648 +0000 UTC m=+23.944823978 (delta=80.272236ms)
	I0403 19:33:26.391222   77599 fix.go:200] guest clock delta is within tolerance: 80.272236ms
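The fix.go lines above read the guest clock with `date +%s.%N` over SSH and compare it to the host clock, accepting the drift if it stays within tolerance. A rough, self-contained sketch of that comparison follows; the 2s tolerance is an assumption for illustration, not minikube's value:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` (seconds.nanoseconds)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1743708806.364931884")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// Tolerance chosen for illustration only.
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %s (within tolerance: %v)\n", delta, delta <= tolerance)
}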
	I0403 19:33:26.391226   77599 start.go:83] releasing machines lock for "bridge-999005", held for 23.941120784s
	I0403 19:33:26.391243   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.391495   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:26.393938   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.394286   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.394329   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.394501   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.394952   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.395143   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.395256   77599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0403 19:33:26.395299   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.395400   77599 ssh_runner.go:195] Run: cat /version.json
	I0403 19:33:26.395433   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.397923   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.398466   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.398524   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.398551   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.399177   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.399375   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.399399   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.399434   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.399582   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.399687   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.399711   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:26.399801   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.399953   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.400091   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:26.511483   77599 ssh_runner.go:195] Run: systemctl --version
	I0403 19:33:26.517463   77599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0403 19:33:26.670834   77599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0403 19:33:26.676690   77599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0403 19:33:26.676757   77599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0403 19:33:26.693357   77599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0403 19:33:26.693383   77599 start.go:495] detecting cgroup driver to use...
	I0403 19:33:26.693442   77599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0403 19:33:26.711536   77599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0403 19:33:26.727184   77599 docker.go:217] disabling cri-docker service (if available) ...
	I0403 19:33:26.727244   77599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0403 19:33:26.744189   77599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0403 19:33:26.758114   77599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0403 19:33:26.874699   77599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0403 19:33:27.029147   77599 docker.go:233] disabling docker service ...
	I0403 19:33:27.029214   77599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0403 19:33:27.042778   77599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0403 19:33:27.056884   77599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0403 19:33:27.165758   77599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0403 19:33:27.283993   77599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0403 19:33:27.297495   77599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0403 19:33:27.315338   77599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0403 19:33:27.315392   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.325005   77599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0403 19:33:27.325056   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.334776   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.345113   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.355007   77599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0403 19:33:27.364955   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.374894   77599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.391740   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.401813   77599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0403 19:33:27.411004   77599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0403 19:33:27.411051   77599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0403 19:33:27.423701   77599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0403 19:33:27.432566   77599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:27.549830   77599 ssh_runner.go:195] Run: sudo systemctl restart crio
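The block above adjusts /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, cgroup manager, conmon_cgroup, default_sysctls) and then reloads systemd and restarts CRI-O. As a sketch of the same "rewrite one key in a drop-in, then restart" idea, the helper below replaces or appends a single key; it is invented for illustration and is not how minikube edits the file:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey replaces (or appends) a `key = "value"` line in a CRI-O drop-in
// file, mirroring what the sed one-liners in the log do.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	line := fmt.Sprintf("%s = %q", key, value)
	re := regexp.MustCompile(`(?m)^\s*#?\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	var out []byte
	if re.Match(data) {
		out = re.ReplaceAll(data, []byte(line))
	} else {
		out = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf",
		"pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	// A real provisioner would follow this with `systemctl restart crio`,
	// as the log does.
}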
	I0403 19:33:27.639431   77599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0403 19:33:27.639494   77599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0403 19:33:27.644011   77599 start.go:563] Will wait 60s for crictl version
	I0403 19:33:27.644059   77599 ssh_runner.go:195] Run: which crictl
	I0403 19:33:27.647488   77599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0403 19:33:27.684002   77599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
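After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s for crictl to answer. A minimal sketch of that kind of bounded wait on a filesystem path is shown below; the 500ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the given path exists or the timeout expires,
// analogous to "Will wait 60s for socket path /var/run/crio/crio.sock".
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}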
	I0403 19:33:27.684079   77599 ssh_runner.go:195] Run: crio --version
	I0403 19:33:27.714223   77599 ssh_runner.go:195] Run: crio --version
	I0403 19:33:27.741585   77599 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0403 19:33:26.884187   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:28.885446   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:30.384628   75819 pod_ready.go:93] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.384654   75819 pod_ready.go:82] duration metric: took 14.506093364s for pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.384666   75819 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.391041   75819 pod_ready.go:93] pod "etcd-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.391069   75819 pod_ready.go:82] duration metric: took 6.395099ms for pod "etcd-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.391082   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.396442   75819 pod_ready.go:93] pod "kube-apiserver-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.396465   75819 pod_ready.go:82] duration metric: took 5.374496ms for pod "kube-apiserver-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.396475   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.403106   75819 pod_ready.go:93] pod "kube-controller-manager-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.403125   75819 pod_ready.go:82] duration metric: took 6.641201ms for pod "kube-controller-manager-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.403137   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5wp5x" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.407151   75819 pod_ready.go:93] pod "kube-proxy-5wp5x" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.407185   75819 pod_ready.go:82] duration metric: took 4.039313ms for pod "kube-proxy-5wp5x" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.407197   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.782264   75819 pod_ready.go:93] pod "kube-scheduler-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.782294   75819 pod_ready.go:82] duration metric: took 375.086145ms for pod "kube-scheduler-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.782309   75819 pod_ready.go:39] duration metric: took 14.915929273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
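The pod_ready.go lines poll each system-critical pod until its Ready condition reports True, which is why the coredns pod logs "Ready":"False" repeatedly before flipping to "True". A stripped-down version of that check using client-go is sketched below; the kubeconfig path, poll interval and hard-coded pod name are placeholders, and this is not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-qxf6t", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up waiting for pod to be Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}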
	I0403 19:33:30.782329   75819 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:33:30.782393   75819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:33:30.798036   75819 api_server.go:72] duration metric: took 20.822884639s to wait for apiserver process to appear ...
	I0403 19:33:30.798067   75819 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:33:30.798089   75819 api_server.go:253] Checking apiserver healthz at https://192.168.72.34:8443/healthz ...
	I0403 19:33:30.803997   75819 api_server.go:279] https://192.168.72.34:8443/healthz returned 200:
	ok
	I0403 19:33:30.805211   75819 api_server.go:141] control plane version: v1.32.2
	I0403 19:33:30.805239   75819 api_server.go:131] duration metric: took 7.159207ms to wait for apiserver health ...
	I0403 19:33:30.805248   75819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:33:30.983942   75819 system_pods.go:59] 7 kube-system pods found
	I0403 19:33:30.984001   75819 system_pods.go:61] "coredns-668d6bf9bc-qxf6t" [c2f4058a-3dd8-4489-8fbc-05a2270375e4] Running
	I0403 19:33:30.984009   75819 system_pods.go:61] "etcd-flannel-999005" [67a1995c-eb31-4f43-85dc-abe52818818b] Running
	I0403 19:33:30.984015   75819 system_pods.go:61] "kube-apiserver-flannel-999005" [3b6f77fb-86b6-4f3a-91d7-ae7b58f084f8] Running
	I0403 19:33:30.984021   75819 system_pods.go:61] "kube-controller-manager-flannel-999005" [344cd255-fe98-41ef-818b-e79c931c72c3] Running
	I0403 19:33:30.984026   75819 system_pods.go:61] "kube-proxy-5wp5x" [e3f733e6-641a-4c29-94e7-a11cca7d4707] Running
	I0403 19:33:30.984035   75819 system_pods.go:61] "kube-scheduler-flannel-999005" [8a6014ba-ea10-4d6e-8e23-708cabaaeac9] Running
	I0403 19:33:30.984040   75819 system_pods.go:61] "storage-provisioner" [6785981d-1626-4f5a-ab63-000a23fcdce1] Running
	I0403 19:33:30.984048   75819 system_pods.go:74] duration metric: took 178.79249ms to wait for pod list to return data ...
	I0403 19:33:30.984056   75819 default_sa.go:34] waiting for default service account to be created ...
	I0403 19:33:31.182732   75819 default_sa.go:45] found service account: "default"
	I0403 19:33:31.182760   75819 default_sa.go:55] duration metric: took 198.696832ms for default service account to be created ...
	I0403 19:33:31.182774   75819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 19:33:31.385033   75819 system_pods.go:86] 7 kube-system pods found
	I0403 19:33:31.385057   75819 system_pods.go:89] "coredns-668d6bf9bc-qxf6t" [c2f4058a-3dd8-4489-8fbc-05a2270375e4] Running
	I0403 19:33:31.385062   75819 system_pods.go:89] "etcd-flannel-999005" [67a1995c-eb31-4f43-85dc-abe52818818b] Running
	I0403 19:33:31.385066   75819 system_pods.go:89] "kube-apiserver-flannel-999005" [3b6f77fb-86b6-4f3a-91d7-ae7b58f084f8] Running
	I0403 19:33:31.385069   75819 system_pods.go:89] "kube-controller-manager-flannel-999005" [344cd255-fe98-41ef-818b-e79c931c72c3] Running
	I0403 19:33:31.385073   75819 system_pods.go:89] "kube-proxy-5wp5x" [e3f733e6-641a-4c29-94e7-a11cca7d4707] Running
	I0403 19:33:31.385076   75819 system_pods.go:89] "kube-scheduler-flannel-999005" [8a6014ba-ea10-4d6e-8e23-708cabaaeac9] Running
	I0403 19:33:31.385079   75819 system_pods.go:89] "storage-provisioner" [6785981d-1626-4f5a-ab63-000a23fcdce1] Running
	I0403 19:33:31.385085   75819 system_pods.go:126] duration metric: took 202.306181ms to wait for k8s-apps to be running ...
	I0403 19:33:31.385091   75819 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 19:33:31.385126   75819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:33:31.404702   75819 system_svc.go:56] duration metric: took 19.600688ms WaitForService to wait for kubelet
	I0403 19:33:31.404730   75819 kubeadm.go:582] duration metric: took 21.4295849s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:31.404750   75819 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:33:31.582762   75819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:33:31.582801   75819 node_conditions.go:123] node cpu capacity is 2
	I0403 19:33:31.582836   75819 node_conditions.go:105] duration metric: took 178.062088ms to run NodePressure ...
	I0403 19:33:31.582854   75819 start.go:241] waiting for startup goroutines ...
	I0403 19:33:31.582869   75819 start.go:246] waiting for cluster config update ...
	I0403 19:33:31.582887   75819 start.go:255] writing updated cluster config ...
	I0403 19:33:31.583197   75819 ssh_runner.go:195] Run: rm -f paused
	I0403 19:33:31.635619   75819 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 19:33:31.638459   75819 out.go:177] * Done! kubectl is now configured to use "flannel-999005" cluster and "default" namespace by default
	I0403 19:33:27.742812   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:27.745608   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:27.745919   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:27.745942   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:27.746168   77599 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0403 19:33:27.751053   77599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 19:33:27.764022   77599 kubeadm.go:883] updating cluster {Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0403 19:33:27.764144   77599 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 19:33:27.764216   77599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:33:27.796330   77599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0403 19:33:27.796388   77599 ssh_runner.go:195] Run: which lz4
	I0403 19:33:27.800001   77599 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0403 19:33:27.803844   77599 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0403 19:33:27.803872   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0403 19:33:29.013823   77599 crio.go:462] duration metric: took 1.21384319s to copy over tarball
	I0403 19:33:29.013908   77599 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0403 19:33:31.265429   77599 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25149294s)
	I0403 19:33:31.265456   77599 crio.go:469] duration metric: took 2.251598795s to extract the tarball
	I0403 19:33:31.265466   77599 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0403 19:33:31.311717   77599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:33:31.357972   77599 crio.go:514] all images are preloaded for cri-o runtime.
	I0403 19:33:31.357990   77599 cache_images.go:84] Images are preloaded, skipping loading
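Whether the preload tarball is copied at all hinges on `sudo crictl images --output json`: the first run (before extraction) does not show registry.k8s.io/kube-apiserver:v1.32.2, the second run does, so loading is skipped. A small sketch of that presence check follows; the JSON field names are taken from crictl's documented output format and may differ across versions:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the subset of `crictl images --output json`
// that this sketch relies on.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage returns true if any listed image carries the wanted tag.
func hasImage(wanted string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if tag == wanted {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.32.2")
	if err != nil {
		fmt.Println("could not list images:", err)
		return
	}
	fmt.Println("preloaded:", ok)
}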
	I0403 19:33:31.357996   77599 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.32.2 crio true true} ...
	I0403 19:33:31.358074   77599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-999005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0403 19:33:31.358151   77599 ssh_runner.go:195] Run: crio config
	I0403 19:33:31.405178   77599 cni.go:84] Creating CNI manager for "bridge"
	I0403 19:33:31.405201   77599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0403 19:33:31.405225   77599 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-999005 NodeName:bridge-999005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0403 19:33:31.405365   77599 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-999005"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0403 19:33:31.405440   77599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0403 19:33:31.414987   77599 binaries.go:44] Found k8s binaries, skipping transfer
	I0403 19:33:31.415051   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0403 19:33:31.423910   77599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0403 19:33:31.440728   77599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0403 19:33:31.457926   77599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
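The kubeadm, kubelet and kube-proxy configuration printed above is rendered from per-profile values (node IP, cluster name, cgroup driver, CRI socket, pod/service CIDRs) and then written to /var/tmp/minikube/kubeadm.yaml.new on the node. Purely for illustration, here is a tiny text/template sketch that renders only the KubeletConfiguration fragment from such values; the template text and struct are invented, not minikube's:

package main

import (
	"os"
	"text/template"
)

// kubeletValues carries a handful of the settings that appear in the
// KubeletConfiguration block of the log above.
type kubeletValues struct {
	CgroupDriver  string
	CRISocket     string
	ClusterDomain string
}

const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
clusterDomain: "{{.ClusterDomain}}"
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	_ = t.Execute(os.Stdout, kubeletValues{
		CgroupDriver:  "cgroupfs",
		CRISocket:     "unix:///var/run/crio/crio.sock",
		ClusterDomain: "cluster.local",
	})
}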
	I0403 19:33:31.473099   77599 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0403 19:33:31.476839   77599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 19:33:31.489178   77599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:31.648751   77599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:33:31.669990   77599 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005 for IP: 192.168.39.185
	I0403 19:33:31.670005   77599 certs.go:194] generating shared ca certs ...
	I0403 19:33:31.670019   77599 certs.go:226] acquiring lock for ca certs: {Name:mk34f09724355b828d31eb1ee97771fd3b4645c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:31.670173   77599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key
	I0403 19:33:31.670222   77599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key
	I0403 19:33:31.670233   77599 certs.go:256] generating profile certs ...
	I0403 19:33:31.670294   77599 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.key
	I0403 19:33:31.670311   77599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt with IP's: []
	I0403 19:33:31.786831   77599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt ...
	I0403 19:33:31.786859   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: {Name:mkf649d0c8846125bd9d91dd0614dd3edfd43b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:31.787055   77599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.key ...
	I0403 19:33:31.787070   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.key: {Name:mkea47be4f98d7242ecb2031208f90bf3ddcfbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:31.787180   77599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7
	I0403 19:33:31.787196   77599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185]
	I0403 19:33:32.247425   77599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7 ...
	I0403 19:33:32.247474   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7: {Name:mkb6bfa4c7f67a4ee70ff58016a1c305b43c986d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.247650   77599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7 ...
	I0403 19:33:32.247672   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7: {Name:mk32e06deb5b5d3858815a6cc3fd3d129517ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.247754   77599 certs.go:381] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7 -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt
	I0403 19:33:32.247827   77599 certs.go:385] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7 -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key
	I0403 19:33:32.247877   77599 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key
	I0403 19:33:32.247891   77599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt with IP's: []
	I0403 19:33:32.541993   77599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt ...
	I0403 19:33:32.542032   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt: {Name:mka4e60c00e3edab5ba1c58c999a89035bcada4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.542254   77599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key ...
	I0403 19:33:32.542274   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key: {Name:mkde5f934453d4d4ad6f3ee32b9cd909c8295965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.542504   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem (1338 bytes)
	W0403 19:33:32.542553   77599 certs.go:480] ignoring /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552_empty.pem, impossibly tiny 0 bytes
	I0403 19:33:32.542568   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem (1675 bytes)
	I0403 19:33:32.542598   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem (1082 bytes)
	I0403 19:33:32.542631   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem (1123 bytes)
	I0403 19:33:32.542662   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem (1675 bytes)
	I0403 19:33:32.542713   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:33:32.543437   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0403 19:33:32.573758   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0403 19:33:32.607840   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0403 19:33:32.640302   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0403 19:33:32.664859   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0403 19:33:32.688081   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0403 19:33:32.713262   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0403 19:33:32.738235   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0403 19:33:32.760858   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /usr/share/ca-certificates/215522.pem (1708 bytes)
	I0403 19:33:32.785677   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0403 19:33:32.812357   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem --> /usr/share/ca-certificates/21552.pem (1338 bytes)
	I0403 19:33:32.837494   77599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0403 19:33:32.855867   77599 ssh_runner.go:195] Run: openssl version
	I0403 19:33:32.861693   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215522.pem && ln -fs /usr/share/ca-certificates/215522.pem /etc/ssl/certs/215522.pem"
	I0403 19:33:32.873958   77599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215522.pem
	I0403 19:33:32.878670   77599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  3 18:20 /usr/share/ca-certificates/215522.pem
	I0403 19:33:32.878720   77599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215522.pem
	I0403 19:33:32.884412   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215522.pem /etc/ssl/certs/3ec20f2e.0"
	I0403 19:33:32.895046   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0403 19:33:32.907127   77599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:33:32.911596   77599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  3 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:33:32.911653   77599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:33:32.917387   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0403 19:33:32.929021   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21552.pem && ln -fs /usr/share/ca-certificates/21552.pem /etc/ssl/certs/21552.pem"
	I0403 19:33:32.939538   77599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21552.pem
	I0403 19:33:32.943923   77599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  3 18:20 /usr/share/ca-certificates/21552.pem
	I0403 19:33:32.944004   77599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21552.pem
	I0403 19:33:32.949423   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21552.pem /etc/ssl/certs/51391683.0"
	I0403 19:33:32.960722   77599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0403 19:33:32.965345   77599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0403 19:33:32.965401   77599 kubeadm.go:392] StartCluster: {Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:33:32.965483   77599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0403 19:33:32.965542   77599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 19:33:33.006784   77599 cri.go:89] found id: ""
	I0403 19:33:33.006867   77599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0403 19:33:33.020183   77599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0403 19:33:33.032692   77599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:33:33.044354   77599 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:33:33.044374   77599 kubeadm.go:157] found existing configuration files:
	
	I0403 19:33:33.044424   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:33:33.054955   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:33:33.055012   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:33:33.065535   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:33:33.075309   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:33:33.075362   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:33:33.084429   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:33:33.094442   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:33:33.094494   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:33:33.104926   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:33:33.113846   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:33:33.113901   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:33:33.123447   77599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0403 19:33:33.175768   77599 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0403 19:33:33.175858   77599 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 19:33:33.283828   77599 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 19:33:33.283918   77599 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 19:33:33.284054   77599 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0403 19:33:33.292775   77599 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 19:33:33.394356   77599 out.go:235]   - Generating certificates and keys ...
	I0403 19:33:33.394483   77599 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 19:33:33.394561   77599 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 19:33:33.485736   77599 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0403 19:33:33.658670   77599 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0403 19:33:33.890328   77599 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0403 19:33:34.033068   77599 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0403 19:33:34.206188   77599 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0403 19:33:34.206439   77599 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-999005 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0403 19:33:34.284743   77599 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0403 19:33:34.285173   77599 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-999005 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0403 19:33:34.392026   77599 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0403 19:33:34.810433   77599 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0403 19:33:35.031395   77599 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0403 19:33:35.031595   77599 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 19:33:35.090736   77599 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 19:33:35.311577   77599 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0403 19:33:35.707554   77599 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 19:33:35.820376   77599 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 19:33:35.956268   77599 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 19:33:35.956874   77599 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 19:33:35.959282   77599 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 19:33:35.961148   77599 out.go:235]   - Booting up control plane ...
	I0403 19:33:35.961289   77599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 19:33:35.961399   77599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 19:33:35.961510   77599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 19:33:35.976979   77599 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 19:33:35.984810   77599 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 19:33:35.984907   77599 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 19:33:36.127595   77599 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0403 19:33:36.127753   77599 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0403 19:33:37.628536   77599 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502119988s
	I0403 19:33:37.628648   77599 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0403 19:33:42.629743   77599 kubeadm.go:310] [api-check] The API server is healthy after 5.001769611s
	I0403 19:33:42.644211   77599 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0403 19:33:42.657726   77599 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0403 19:33:42.676447   77599 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0403 19:33:42.676702   77599 kubeadm.go:310] [mark-control-plane] Marking the node bridge-999005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0403 19:33:42.687306   77599 kubeadm.go:310] [bootstrap-token] Using token: fq7src.0us7ohixvgrd79kz
	I0403 19:33:42.688455   77599 out.go:235]   - Configuring RBAC rules ...
	I0403 19:33:42.688598   77599 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0403 19:33:42.699921   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0403 19:33:42.705060   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0403 19:33:42.708286   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0403 19:33:42.711842   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0403 19:33:42.714732   77599 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0403 19:33:43.034566   77599 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0403 19:33:43.461914   77599 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0403 19:33:44.038634   77599 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0403 19:33:44.038659   77599 kubeadm.go:310] 
	I0403 19:33:44.038745   77599 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0403 19:33:44.038755   77599 kubeadm.go:310] 
	I0403 19:33:44.038871   77599 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0403 19:33:44.038881   77599 kubeadm.go:310] 
	I0403 19:33:44.038916   77599 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0403 19:33:44.039008   77599 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0403 19:33:44.039100   77599 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0403 19:33:44.039134   77599 kubeadm.go:310] 
	I0403 19:33:44.039222   77599 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0403 19:33:44.039235   77599 kubeadm.go:310] 
	I0403 19:33:44.039297   77599 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0403 19:33:44.039307   77599 kubeadm.go:310] 
	I0403 19:33:44.039378   77599 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0403 19:33:44.039475   77599 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0403 19:33:44.039566   77599 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0403 19:33:44.039577   77599 kubeadm.go:310] 
	I0403 19:33:44.039690   77599 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0403 19:33:44.039800   77599 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0403 19:33:44.039812   77599 kubeadm.go:310] 
	I0403 19:33:44.039932   77599 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fq7src.0us7ohixvgrd79kz \
	I0403 19:33:44.040071   77599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 \
	I0403 19:33:44.040122   77599 kubeadm.go:310] 	--control-plane 
	I0403 19:33:44.040136   77599 kubeadm.go:310] 
	I0403 19:33:44.040260   77599 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0403 19:33:44.040279   77599 kubeadm.go:310] 
	I0403 19:33:44.040382   77599 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fq7src.0us7ohixvgrd79kz \
	I0403 19:33:44.040526   77599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 
	I0403 19:33:44.042310   77599 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:33:44.042339   77599 cni.go:84] Creating CNI manager for "bridge"
	I0403 19:33:44.044752   77599 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0403 19:33:44.046058   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0403 19:33:44.056620   77599 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0403 19:33:44.072775   77599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0403 19:33:44.072865   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:44.072907   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-999005 minikube.k8s.io/updated_at=2025_04_03T19_33_44_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053 minikube.k8s.io/name=bridge-999005 minikube.k8s.io/primary=true
	I0403 19:33:44.091241   77599 ops.go:34] apiserver oom_adj: -16
	I0403 19:33:44.213492   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:44.713802   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:45.214487   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:45.714490   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:46.213775   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:46.714137   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:47.214234   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:47.714484   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:48.214082   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:48.316673   77599 kubeadm.go:1113] duration metric: took 4.243867048s to wait for elevateKubeSystemPrivileges
	I0403 19:33:48.316706   77599 kubeadm.go:394] duration metric: took 15.351310395s to StartCluster
	I0403 19:33:48.316727   77599 settings.go:142] acquiring lock: {Name:mk92384ef10350a2ea8b1710e0b74c72a0214398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:48.316801   77599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:33:48.317861   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:48.318088   77599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0403 19:33:48.318097   77599 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:33:48.318175   77599 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0403 19:33:48.318244   77599 addons.go:69] Setting storage-provisioner=true in profile "bridge-999005"
	I0403 19:33:48.318265   77599 addons.go:238] Setting addon storage-provisioner=true in "bridge-999005"
	I0403 19:33:48.318297   77599 host.go:66] Checking if "bridge-999005" exists ...
	I0403 19:33:48.318313   77599 addons.go:69] Setting default-storageclass=true in profile "bridge-999005"
	I0403 19:33:48.318298   77599 config.go:182] Loaded profile config "bridge-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:48.318356   77599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-999005"
	I0403 19:33:48.318770   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.318796   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.318776   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.318879   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.319539   77599 out.go:177] * Verifying Kubernetes components...
	I0403 19:33:48.321103   77599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:48.336019   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0403 19:33:48.336019   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0403 19:33:48.336447   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.336540   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.336979   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.336996   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.337098   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.337121   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.337332   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.337465   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.337538   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:48.338013   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.338065   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.340961   77599 addons.go:238] Setting addon default-storageclass=true in "bridge-999005"
	I0403 19:33:48.340999   77599 host.go:66] Checking if "bridge-999005" exists ...
	I0403 19:33:48.341322   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.341365   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.355048   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39811
	I0403 19:33:48.355610   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.356196   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.356226   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.356592   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.356792   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:48.356827   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0403 19:33:48.357305   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.357816   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.357835   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.358248   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.358722   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:48.358870   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.358911   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.360538   77599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:33:48.361702   77599 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:48.361718   77599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0403 19:33:48.361733   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:48.365062   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.365531   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:48.365554   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.365701   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:48.365870   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:48.366032   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:48.366166   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:48.374675   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I0403 19:33:48.375202   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.375806   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.375835   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.376141   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.376322   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:48.378097   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:48.378291   77599 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:48.378302   77599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0403 19:33:48.378314   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:48.381118   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.381622   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:48.381645   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.381846   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:48.382025   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:48.382166   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:48.382292   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:48.586906   77599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:33:48.586933   77599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0403 19:33:48.720936   77599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:48.723342   77599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:49.076492   77599 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0403 19:33:49.076540   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.076560   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.076816   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.076831   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.076840   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.076848   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.077211   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.077226   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.077254   77599 main.go:141] libmachine: (bridge-999005) DBG | Closing plugin on server side
	I0403 19:33:49.077567   77599 node_ready.go:35] waiting up to 15m0s for node "bridge-999005" to be "Ready" ...
	I0403 19:33:49.095818   77599 node_ready.go:49] node "bridge-999005" has status "Ready":"True"
	I0403 19:33:49.095840   77599 node_ready.go:38] duration metric: took 18.234764ms for node "bridge-999005" to be "Ready" ...
	I0403 19:33:49.095851   77599 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:49.103291   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.103309   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.103560   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.103582   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.103585   77599 main.go:141] libmachine: (bridge-999005) DBG | Closing plugin on server side
	I0403 19:33:49.106640   77599 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:49.381709   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.381734   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.382012   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.382029   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.382037   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.382044   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.382304   77599 main.go:141] libmachine: (bridge-999005) DBG | Closing plugin on server side
	I0403 19:33:49.382308   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.382332   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.383772   77599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0403 19:33:49.384901   77599 addons.go:514] duration metric: took 1.066742014s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0403 19:33:49.580077   77599 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-999005" context rescaled to 1 replicas
	I0403 19:33:51.111757   77599 pod_ready.go:103] pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:52.112437   77599 pod_ready.go:93] pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:52.112460   77599 pod_ready.go:82] duration metric: took 3.005799611s for pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:52.112469   77599 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:52.114218   77599 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-s979x" not found
	I0403 19:33:52.114244   77599 pod_ready.go:82] duration metric: took 1.768553ms for pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace to be "Ready" ...
	E0403 19:33:52.114257   77599 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-s979x" not found
	I0403 19:33:52.114267   77599 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:53.332014   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:33:53.332308   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:33:53.332328   66718 kubeadm.go:310] 
	I0403 19:33:53.332364   66718 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0403 19:33:53.332399   66718 kubeadm.go:310] 		timed out waiting for the condition
	I0403 19:33:53.332406   66718 kubeadm.go:310] 
	I0403 19:33:53.332435   66718 kubeadm.go:310] 	This error is likely caused by:
	I0403 19:33:53.332465   66718 kubeadm.go:310] 		- The kubelet is not running
	I0403 19:33:53.332560   66718 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0403 19:33:53.332566   66718 kubeadm.go:310] 
	I0403 19:33:53.332655   66718 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0403 19:33:53.332718   66718 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0403 19:33:53.332781   66718 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0403 19:33:53.332790   66718 kubeadm.go:310] 
	I0403 19:33:53.332922   66718 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0403 19:33:53.333025   66718 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0403 19:33:53.333033   66718 kubeadm.go:310] 
	I0403 19:33:53.333168   66718 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0403 19:33:53.333296   66718 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0403 19:33:53.333410   66718 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0403 19:33:53.333518   66718 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0403 19:33:53.333528   66718 kubeadm.go:310] 
	I0403 19:33:53.334367   66718 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:33:53.334492   66718 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0403 19:33:53.334554   66718 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0403 19:33:53.334604   66718 kubeadm.go:394] duration metric: took 7m59.310981648s to StartCluster
	I0403 19:33:53.334636   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:33:53.334685   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:33:53.373643   66718 cri.go:89] found id: ""
	I0403 19:33:53.373669   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.373682   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:33:53.373689   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:33:53.373736   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:33:53.403561   66718 cri.go:89] found id: ""
	I0403 19:33:53.403587   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.403595   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:33:53.403600   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:33:53.403655   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:33:53.433381   66718 cri.go:89] found id: ""
	I0403 19:33:53.433411   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.433420   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:33:53.433427   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:33:53.433480   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:33:53.464729   66718 cri.go:89] found id: ""
	I0403 19:33:53.464758   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.464769   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:33:53.464775   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:33:53.464843   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:33:53.495666   66718 cri.go:89] found id: ""
	I0403 19:33:53.495697   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.495708   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:33:53.495715   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:33:53.495782   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:33:53.527704   66718 cri.go:89] found id: ""
	I0403 19:33:53.527730   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.527739   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:33:53.527747   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:33:53.527804   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:33:53.567852   66718 cri.go:89] found id: ""
	I0403 19:33:53.567874   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.567881   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:33:53.567887   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:33:53.567943   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:33:53.597334   66718 cri.go:89] found id: ""
	I0403 19:33:53.597363   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.597374   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:33:53.597386   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:33:53.597399   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:33:53.653211   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:33:53.653246   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:33:53.666175   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:33:53.666201   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:33:53.736375   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:33:53.736397   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:33:53.736409   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:33:53.837412   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:33:53.837449   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0403 19:33:53.876433   66718 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0403 19:33:53.876481   66718 out.go:270] * 
	W0403 19:33:53.876533   66718 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:33:53.876547   66718 out.go:270] * 
	W0403 19:33:53.877616   66718 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0403 19:33:53.880186   66718 out.go:201] 
	W0403 19:33:53.881256   66718 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[kubeadm init stdout/stderr identical to the "Error starting cluster" output quoted above; 67 duplicate lines elided]
	
	W0403 19:33:53.881290   66718 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0403 19:33:53.881311   66718 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0403 19:33:53.882318   66718 out.go:201] 
	
	
	==> CRI-O <==
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.791541829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743708834791525101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a42d814-2c67-4113-bee9-75413ec44ad4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.791971988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=776ade8a-6111-4edc-9e66-a856318350c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.792016321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=776ade8a-6111-4edc-9e66-a856318350c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.792044018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=776ade8a-6111-4edc-9e66-a856318350c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.822481079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0837b34f-96b2-451c-ba37-207b30b69f63 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.822581485Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0837b34f-96b2-451c-ba37-207b30b69f63 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.823652878Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5144c7b2-7060-4c28-8f7d-e09c0ee48839 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.824246663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743708834824217281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5144c7b2-7060-4c28-8f7d-e09c0ee48839 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.825018587Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9166b3e-5fda-481e-8354-50c54a2958cf name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.825078857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9166b3e-5fda-481e-8354-50c54a2958cf name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.825161346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c9166b3e-5fda-481e-8354-50c54a2958cf name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.873998486Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe24de1b-474c-45da-995e-52b1d4b1421d name=/runtime.v1.RuntimeService/Version
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.874118409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe24de1b-474c-45da-995e-52b1d4b1421d name=/runtime.v1.RuntimeService/Version
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.875460445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1d1b8a8-707f-4838-a052-06ef5c788d06 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.876034146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743708834876005565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1d1b8a8-707f-4838-a052-06ef5c788d06 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.876888392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=119bf2f4-9e67-4ebe-ac95-9e69fd054aca name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.876952130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=119bf2f4-9e67-4ebe-ac95-9e69fd054aca name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.876985563Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=119bf2f4-9e67-4ebe-ac95-9e69fd054aca name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.924150403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8c9600d-e7d8-427f-b81e-66f184dfab68 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.924233813Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8c9600d-e7d8-427f-b81e-66f184dfab68 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.925135485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb402b43-8537-4659-a6de-2dd6b4e401b6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.925515077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743708834925494458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb402b43-8537-4659-a6de-2dd6b4e401b6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.926067907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5faa36dc-de75-421f-93d6-fed1170cb257 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.926126646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5faa36dc-de75-421f-93d6-fed1170cb257 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:33:54 old-k8s-version-471019 crio[636]: time="2025-04-03 19:33:54.926157076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5faa36dc-de75-421f-93d6-fed1170cb257 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 3 19:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052726] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041853] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.065841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.955511] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.571384] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.620728] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.063202] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054417] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.185024] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.123908] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.218372] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +6.279584] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.069499] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.643502] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[Apr 3 19:26] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 3 19:30] systemd-fstab-generator[5045]: Ignoring "noauto" option for root device
	[Apr 3 19:31] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.102429] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:33:55 up 8 min,  0 users,  load average: 0.00, 0.07, 0.04
	Linux old-k8s-version-471019 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000c1a870)
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]: goroutine 161 [select]:
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00066def0, 0x4f0ac20, 0xc000bf20a0, 0x1, 0xc0001020c0)
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000bc4000, 0xc0001020c0)
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bfc200, 0xc000c2e380)
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5507]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 03 19:33:54 old-k8s-version-471019 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 03 19:33:54 old-k8s-version-471019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 03 19:33:54 old-k8s-version-471019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 03 19:33:54 old-k8s-version-471019 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 03 19:33:54 old-k8s-version-471019 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5615]: I0403 19:33:54.961543    5615 server.go:416] Version: v1.20.0
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5615]: I0403 19:33:54.963428    5615 server.go:837] Client rotation is on, will bootstrap in background
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5615]: I0403 19:33:54.969510    5615 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5615]: I0403 19:33:54.971341    5615 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 03 19:33:54 old-k8s-version-471019 kubelet[5615]: W0403 19:33:54.971416    5615 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-471019 -n old-k8s-version-471019
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 2 (231.646555ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-471019" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (509.52s)
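The failure above matches the diagnosis kubeadm itself prints: the kubelet on old-k8s-version-471019 never became healthy, so the wait-control-plane phase timed out and the apiserver was left in state "Stopped". Below is a minimal troubleshooting sketch assembled only from the suggestions surfaced in the log itself; the crio.sock path and the --extra-config hint come from the log, while the --driver/--container-runtime/--kubernetes-version flags are assumptions based on the KVM_Linux_crio job name and the v1.20.0 version shown above, not on the job's actual invocation.
	# check the kubelet inside the minikube VM (commands suggested by the kubeadm output)
	out/minikube-linux-amd64 -p old-k8s-version-471019 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-471019 ssh "sudo journalctl -xeu kubelet"
	# list any control-plane containers CRI-O managed to start (also suggested by the kubeadm output)
	out/minikube-linux-amd64 -p old-k8s-version-471019 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry with the cgroup-driver hint from the minikube suggestion above
	out/minikube-linux-amd64 delete -p old-k8s-version-471019
	out/minikube-linux-amd64 start -p old-k8s-version-471019 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
The last kubelet line captured in the log ("Cannot detect current cgroup on cgroup v2") is consistent with the cgroup-driver suggestion, which is why the retry above includes that flag.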

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:34:09.256076   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:34:15.629429   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/default-k8s-diff-port-875122/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:34:18.146323   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/no-preload-409395/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:34:34.401241   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 28 more times]
E0403 19:35:32.328000   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 5 more times]
E0403 19:35:37.550957   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/default-k8s-diff-port-875122/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 16 more times]
E0403 19:35:54.740334   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:35:54.746685   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:35:54.758036   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:35:54.779361   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:35:54.820702   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:35:54.902247   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:35:55.063741   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:35:55.385408   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:35:56.027022   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:35:57.308308   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 2 more times]
E0403 19:35:59.870510   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 4 more times]
E0403 19:36:04.511789   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:04.518142   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:04.529455   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:04.550833   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:04.592254   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:04.673672   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:04.835323   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:04.991956   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:05.157526   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:36:05.799156   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:36:07.080683   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 2 more times]
E0403 19:36:09.642165   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 4 more times]
E0403 19:36:14.764367   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:15.233330   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 9 more times]
E0403 19:36:25.006113   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 8 more times]
E0403 19:36:34.285268   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/no-preload-409395/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:36:35.715422   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 9 more times]
E0403 19:36:45.488386   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:36:47.075090   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:47.081430   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:47.092769   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:47.114123   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:47.155476   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:47.236969   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:47.398496   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:36:47.719785   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:36:48.361819   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:36:49.643550   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:36:52.205397   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 4 more times]
E0403 19:36:57.326788   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 4 more times]
E0403 19:37:01.988641   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/no-preload-409395/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 5 more times]
E0403 19:37:07.569110   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 8 more times]
E0403 19:37:16.677314   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[previous line repeated 9 more times]
E0403 19:37:26.450564   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:37:28.050465   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:37:34.848784   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:37:34.855106   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:37:34.866462   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:37:34.887838   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:37:34.929241   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:37:35.010680   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:37:35.172159   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:37:35.493601   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:37:36.135714   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:37:37.417160   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:37:39.978711   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:37:45.100097   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:37:53.688384   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/default-k8s-diff-port-875122/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:37:55.342289   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:09.012567   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:12.666273   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:12.672621   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:12.683926   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:12.705296   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:12.746640   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:12.828065   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:12.989561   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:13.311266   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:13.953193   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:15.235291   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:15.823936   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:17.797636   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:21.393163   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/default-k8s-diff-port-875122/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:22.919676   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:31.662963   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:31.669286   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:31.680592   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:31.701972   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:31.743329   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:31.824758   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:31.986288   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:32.308007   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:32.950046   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:33.161605   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:34.232038   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:36.794419   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:38.599051   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:41.916110   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:48.372591   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:52.158418   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:53.642895   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:56.785547   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:59.397272   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:59.403633   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:59.414935   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:59.436261   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:38:59.478192   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:59.559654   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:38:59.721157   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:39:00.042847   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:39:00.684582   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:39:01.965981   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:39:04.527989   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:39:09.255555   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:39:09.650251   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:39:12.639692   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:39:19.892152   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 10 more times)
E0403 19:39:30.934838   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 2 more times)
E0403 19:39:34.401336   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:39:34.604874   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 4 more times)
E0403 19:39:40.373940   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 13 more times)
E0403 19:39:53.601069   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 24 more times)
E0403 19:40:18.706913   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:40:21.336112   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 33 more times)
E0403 19:40:54.740361   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:40:56.526209   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 7 more times)
E0403 19:41:04.512081   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 10 more times)
E0403 19:41:15.522936   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 5 more times)
E0403 19:41:22.440600   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 9 more times)
E0403 19:41:32.214610   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:41:34.284807   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/no-preload-409395/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 8 more times)
E0403 19:41:43.258181   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
(previous message repeated 3 more times)
E0403 19:41:47.075023   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:42:14.776744   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:42:34.848793   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:42:53.688642   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/default-k8s-diff-port-875122/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
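The "connection refused" warnings above come from the test helper repeatedly listing pods that match the label selector k8s-app=kubernetes-dashboard against the apiserver endpoint 192.168.61.209:8443; because the apiserver never comes back after the stop/start cycle, every poll fails until the 9m0s deadline expires. What follows is a minimal, hypothetical sketch of such a poll loop using client-go, not the actual helpers_test.go implementation; the 10-second interval is an assumption, and the kubeconfig path is the one reported later in this log.

// Hypothetical sketch of a dashboard-pod wait loop (not the test's code).
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the log below; adjust for other setups.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20591-14371/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 9-minute budget as the failing wait above.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// With the apiserver down, this is where "dial tcp ... connection refused" surfaces.
			fmt.Println("WARNING: pod list returned:", err)
		} else if len(pods.Items) > 0 {
			fmt.Println("dashboard pod found:", pods.Items[0].Name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("failed to start within 9m0s:", ctx.Err())
			return
		case <-time.After(10 * time.Second): // assumed poll interval
		}
	}
}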
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-471019 -n old-k8s-version-471019
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 2 (215.893784ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-471019" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
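Once the wait times out, the test checks whether the apiserver is even reachable before collecting post-mortem logs: "minikube status --format={{.APIServer}}" returns "Stopped" with exit status 2, so kubectl-based checks are skipped, even though the host itself still reports "Running" below. As a rough illustration only (not the test's code), such a status probe can be driven from Go with os/exec; the binary path and flags are copied from the command shown above, and exit status 2 is treated only as "may be ok", exactly as the log notes.

// Hypothetical sketch: run the minikube status command from the log and
// capture both the formatted apiserver state and the process exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.APIServer}}",
		"-p", "old-k8s-version-471019", "-n", "old-k8s-version-471019")

	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out)) // "Stopped" in the run above

	exitCode := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		exitCode = exitErr.ExitCode() // exit status 2 in the run above ("may be ok")
	} else if err != nil {
		panic(err)
	}

	fmt.Printf("apiserver=%s exit=%d\n", state, exitCode)
	if state != "Running" {
		fmt.Println("apiserver is not running, skipping kubectl commands")
	}
}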
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 2 (209.435359ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-471019 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-999005 sudo iptables                       | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo docker                         | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo find                           | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo crio                           | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-999005                                     | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 19:33:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 19:33:02.376869   77599 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:33:02.377092   77599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:33:02.377103   77599 out.go:358] Setting ErrFile to fd 2...
	I0403 19:33:02.377107   77599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:33:02.377328   77599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:33:02.378024   77599 out.go:352] Setting JSON to false
	I0403 19:33:02.379161   77599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8127,"bootTime":1743700655,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:33:02.379239   77599 start.go:139] virtualization: kvm guest
	I0403 19:33:02.380689   77599 out.go:177] * [bridge-999005] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:33:02.382009   77599 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:33:02.382020   77599 notify.go:220] Checking for updates...
	I0403 19:33:02.384007   77599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:33:02.385169   77599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:33:02.386247   77599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:33:02.387253   77599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:33:02.388401   77599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:33:02.389846   77599 config.go:182] Loaded profile config "enable-default-cni-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:02.389947   77599 config.go:182] Loaded profile config "flannel-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:02.390028   77599 config.go:182] Loaded profile config "old-k8s-version-471019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:33:02.390112   77599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:33:02.427821   77599 out.go:177] * Using the kvm2 driver based on user configuration
	I0403 19:33:02.428964   77599 start.go:297] selected driver: kvm2
	I0403 19:33:02.428982   77599 start.go:901] validating driver "kvm2" against <nil>
	I0403 19:33:02.428993   77599 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:33:02.429643   77599 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:33:02.429716   77599 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 19:33:02.446281   77599 install.go:137] /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0403 19:33:02.446337   77599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 19:33:02.446713   77599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:02.446763   77599 cni.go:84] Creating CNI manager for "bridge"
	I0403 19:33:02.446772   77599 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 19:33:02.446854   77599 start.go:340] cluster config:
	{Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:33:02.446984   77599 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:33:02.448554   77599 out.go:177] * Starting "bridge-999005" primary control-plane node in "bridge-999005" cluster
	I0403 19:33:02.449622   77599 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 19:33:02.449667   77599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0403 19:33:02.449684   77599 cache.go:56] Caching tarball of preloaded images
	I0403 19:33:02.449752   77599 preload.go:172] Found /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0403 19:33:02.449762   77599 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0403 19:33:02.449877   77599 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/config.json ...
	I0403 19:33:02.449899   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/config.json: {Name:mk2379bf0104743094b5c7dde2a4c0ad0c4e9cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:02.450060   77599 start.go:360] acquireMachinesLock for bridge-999005: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 19:33:02.450095   77599 start.go:364] duration metric: took 19.647µs to acquireMachinesLock for "bridge-999005"
	I0403 19:33:02.450116   77599 start.go:93] Provisioning new machine with config: &{Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:33:02.450176   77599 start.go:125] createHost starting for "" (driver="kvm2")
	I0403 19:32:59.278761   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:01.780343   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:04.220006   75819 kubeadm.go:310] [api-check] The API server is healthy after 5.502485937s
	I0403 19:33:04.234838   75819 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0403 19:33:04.249952   75819 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0403 19:33:04.281515   75819 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0403 19:33:04.281698   75819 kubeadm.go:310] [mark-control-plane] Marking the node flannel-999005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0403 19:33:04.292849   75819 kubeadm.go:310] [bootstrap-token] Using token: i2opuv.2m47nf28qphn3gfh
	I0403 19:33:04.294088   75819 out.go:235]   - Configuring RBAC rules ...
	I0403 19:33:04.294250   75819 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0403 19:33:04.298299   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0403 19:33:04.306023   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0403 19:33:04.310195   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0403 19:33:04.316560   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0403 19:33:04.323542   75819 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0403 19:33:04.627539   75819 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0403 19:33:05.059216   75819 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0403 19:33:05.626881   75819 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0403 19:33:05.627833   75819 kubeadm.go:310] 
	I0403 19:33:05.627927   75819 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0403 19:33:05.627938   75819 kubeadm.go:310] 
	I0403 19:33:05.628077   75819 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0403 19:33:05.628099   75819 kubeadm.go:310] 
	I0403 19:33:05.628132   75819 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0403 19:33:05.628211   75819 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0403 19:33:05.628291   75819 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0403 19:33:05.628302   75819 kubeadm.go:310] 
	I0403 19:33:05.628386   75819 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0403 19:33:05.628396   75819 kubeadm.go:310] 
	I0403 19:33:05.628464   75819 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0403 19:33:05.628473   75819 kubeadm.go:310] 
	I0403 19:33:05.628539   75819 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0403 19:33:05.628647   75819 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0403 19:33:05.628774   75819 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0403 19:33:05.628800   75819 kubeadm.go:310] 
	I0403 19:33:05.628905   75819 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0403 19:33:05.629014   75819 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0403 19:33:05.629022   75819 kubeadm.go:310] 
	I0403 19:33:05.629117   75819 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i2opuv.2m47nf28qphn3gfh \
	I0403 19:33:05.629239   75819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 \
	I0403 19:33:05.629267   75819 kubeadm.go:310] 	--control-plane 
	I0403 19:33:05.629275   75819 kubeadm.go:310] 
	I0403 19:33:05.629382   75819 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0403 19:33:05.629391   75819 kubeadm.go:310] 
	I0403 19:33:05.629494   75819 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i2opuv.2m47nf28qphn3gfh \
	I0403 19:33:05.629630   75819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 
	I0403 19:33:05.630329   75819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:33:05.630359   75819 cni.go:84] Creating CNI manager for "flannel"
	I0403 19:33:05.631659   75819 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0403 19:33:02.451472   77599 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0403 19:33:02.451609   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:02.451664   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:02.466285   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0403 19:33:02.466761   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:02.467372   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:02.467391   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:02.467816   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:02.468014   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:02.468179   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:02.468339   77599 start.go:159] libmachine.API.Create for "bridge-999005" (driver="kvm2")
	I0403 19:33:02.468372   77599 client.go:168] LocalClient.Create starting
	I0403 19:33:02.468415   77599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem
	I0403 19:33:02.468455   77599 main.go:141] libmachine: Decoding PEM data...
	I0403 19:33:02.468481   77599 main.go:141] libmachine: Parsing certificate...
	I0403 19:33:02.468554   77599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem
	I0403 19:33:02.468582   77599 main.go:141] libmachine: Decoding PEM data...
	I0403 19:33:02.468601   77599 main.go:141] libmachine: Parsing certificate...
	I0403 19:33:02.468620   77599 main.go:141] libmachine: Running pre-create checks...
	I0403 19:33:02.468639   77599 main.go:141] libmachine: (bridge-999005) Calling .PreCreateCheck
	I0403 19:33:02.468953   77599 main.go:141] libmachine: (bridge-999005) Calling .GetConfigRaw
	I0403 19:33:02.469335   77599 main.go:141] libmachine: Creating machine...
	I0403 19:33:02.469347   77599 main.go:141] libmachine: (bridge-999005) Calling .Create
	I0403 19:33:02.469470   77599 main.go:141] libmachine: (bridge-999005) creating KVM machine...
	I0403 19:33:02.469485   77599 main.go:141] libmachine: (bridge-999005) creating network...
	I0403 19:33:02.470738   77599 main.go:141] libmachine: (bridge-999005) DBG | found existing default KVM network
	I0403 19:33:02.472415   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:02.472249   77621 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123820}
	I0403 19:33:02.472448   77599 main.go:141] libmachine: (bridge-999005) DBG | created network xml: 
	I0403 19:33:02.472470   77599 main.go:141] libmachine: (bridge-999005) DBG | <network>
	I0403 19:33:02.472483   77599 main.go:141] libmachine: (bridge-999005) DBG |   <name>mk-bridge-999005</name>
	I0403 19:33:02.472494   77599 main.go:141] libmachine: (bridge-999005) DBG |   <dns enable='no'/>
	I0403 19:33:02.472504   77599 main.go:141] libmachine: (bridge-999005) DBG |   
	I0403 19:33:02.472515   77599 main.go:141] libmachine: (bridge-999005) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0403 19:33:02.472526   77599 main.go:141] libmachine: (bridge-999005) DBG |     <dhcp>
	I0403 19:33:02.472534   77599 main.go:141] libmachine: (bridge-999005) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0403 19:33:02.472550   77599 main.go:141] libmachine: (bridge-999005) DBG |     </dhcp>
	I0403 19:33:02.472564   77599 main.go:141] libmachine: (bridge-999005) DBG |   </ip>
	I0403 19:33:02.472577   77599 main.go:141] libmachine: (bridge-999005) DBG |   
	I0403 19:33:02.472586   77599 main.go:141] libmachine: (bridge-999005) DBG | </network>
	I0403 19:33:02.472596   77599 main.go:141] libmachine: (bridge-999005) DBG | 
	I0403 19:33:02.477381   77599 main.go:141] libmachine: (bridge-999005) DBG | trying to create private KVM network mk-bridge-999005 192.168.39.0/24...
	I0403 19:33:02.549445   77599 main.go:141] libmachine: (bridge-999005) setting up store path in /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005 ...
	I0403 19:33:02.549483   77599 main.go:141] libmachine: (bridge-999005) DBG | private KVM network mk-bridge-999005 192.168.39.0/24 created
	I0403 19:33:02.549497   77599 main.go:141] libmachine: (bridge-999005) building disk image from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0403 19:33:02.549523   77599 main.go:141] libmachine: (bridge-999005) Downloading /home/jenkins/minikube-integration/20591-14371/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0403 19:33:02.549542   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:02.549359   77621 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:33:02.808436   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:02.808274   77621 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa...
	I0403 19:33:03.010631   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:03.010517   77621 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/bridge-999005.rawdisk...
	I0403 19:33:03.010661   77599 main.go:141] libmachine: (bridge-999005) DBG | Writing magic tar header
	I0403 19:33:03.010671   77599 main.go:141] libmachine: (bridge-999005) DBG | Writing SSH key tar header
	I0403 19:33:03.010768   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:03.010673   77621 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005 ...
	I0403 19:33:03.010855   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005
	I0403 19:33:03.010883   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005 (perms=drwx------)
	I0403 19:33:03.010899   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines (perms=drwxr-xr-x)
	I0403 19:33:03.010913   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines
	I0403 19:33:03.010948   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:33:03.010961   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371
	I0403 19:33:03.010974   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0403 19:33:03.010993   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube (perms=drwxr-xr-x)
	I0403 19:33:03.011002   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins
	I0403 19:33:03.011012   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home
	I0403 19:33:03.011022   77599 main.go:141] libmachine: (bridge-999005) DBG | skipping /home - not owner
	I0403 19:33:03.011036   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371 (perms=drwxrwxr-x)
	I0403 19:33:03.011047   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0403 19:33:03.011061   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0403 19:33:03.011071   77599 main.go:141] libmachine: (bridge-999005) creating domain...
	I0403 19:33:03.012376   77599 main.go:141] libmachine: (bridge-999005) define libvirt domain using xml: 
	I0403 19:33:03.012401   77599 main.go:141] libmachine: (bridge-999005) <domain type='kvm'>
	I0403 19:33:03.012412   77599 main.go:141] libmachine: (bridge-999005)   <name>bridge-999005</name>
	I0403 19:33:03.012421   77599 main.go:141] libmachine: (bridge-999005)   <memory unit='MiB'>3072</memory>
	I0403 19:33:03.012429   77599 main.go:141] libmachine: (bridge-999005)   <vcpu>2</vcpu>
	I0403 19:33:03.012436   77599 main.go:141] libmachine: (bridge-999005)   <features>
	I0403 19:33:03.012444   77599 main.go:141] libmachine: (bridge-999005)     <acpi/>
	I0403 19:33:03.012452   77599 main.go:141] libmachine: (bridge-999005)     <apic/>
	I0403 19:33:03.012461   77599 main.go:141] libmachine: (bridge-999005)     <pae/>
	I0403 19:33:03.012468   77599 main.go:141] libmachine: (bridge-999005)     
	I0403 19:33:03.012473   77599 main.go:141] libmachine: (bridge-999005)   </features>
	I0403 19:33:03.012482   77599 main.go:141] libmachine: (bridge-999005)   <cpu mode='host-passthrough'>
	I0403 19:33:03.012508   77599 main.go:141] libmachine: (bridge-999005)   
	I0403 19:33:03.012524   77599 main.go:141] libmachine: (bridge-999005)   </cpu>
	I0403 19:33:03.012549   77599 main.go:141] libmachine: (bridge-999005)   <os>
	I0403 19:33:03.012572   77599 main.go:141] libmachine: (bridge-999005)     <type>hvm</type>
	I0403 19:33:03.012588   77599 main.go:141] libmachine: (bridge-999005)     <boot dev='cdrom'/>
	I0403 19:33:03.012606   77599 main.go:141] libmachine: (bridge-999005)     <boot dev='hd'/>
	I0403 19:33:03.012615   77599 main.go:141] libmachine: (bridge-999005)     <bootmenu enable='no'/>
	I0403 19:33:03.012622   77599 main.go:141] libmachine: (bridge-999005)   </os>
	I0403 19:33:03.012630   77599 main.go:141] libmachine: (bridge-999005)   <devices>
	I0403 19:33:03.012641   77599 main.go:141] libmachine: (bridge-999005)     <disk type='file' device='cdrom'>
	I0403 19:33:03.012653   77599 main.go:141] libmachine: (bridge-999005)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/boot2docker.iso'/>
	I0403 19:33:03.012670   77599 main.go:141] libmachine: (bridge-999005)       <target dev='hdc' bus='scsi'/>
	I0403 19:33:03.012679   77599 main.go:141] libmachine: (bridge-999005)       <readonly/>
	I0403 19:33:03.012701   77599 main.go:141] libmachine: (bridge-999005)     </disk>
	I0403 19:33:03.012714   77599 main.go:141] libmachine: (bridge-999005)     <disk type='file' device='disk'>
	I0403 19:33:03.012725   77599 main.go:141] libmachine: (bridge-999005)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0403 19:33:03.012745   77599 main.go:141] libmachine: (bridge-999005)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/bridge-999005.rawdisk'/>
	I0403 19:33:03.012755   77599 main.go:141] libmachine: (bridge-999005)       <target dev='hda' bus='virtio'/>
	I0403 19:33:03.012769   77599 main.go:141] libmachine: (bridge-999005)     </disk>
	I0403 19:33:03.012801   77599 main.go:141] libmachine: (bridge-999005)     <interface type='network'>
	I0403 19:33:03.012814   77599 main.go:141] libmachine: (bridge-999005)       <source network='mk-bridge-999005'/>
	I0403 19:33:03.012822   77599 main.go:141] libmachine: (bridge-999005)       <model type='virtio'/>
	I0403 19:33:03.012827   77599 main.go:141] libmachine: (bridge-999005)     </interface>
	I0403 19:33:03.012834   77599 main.go:141] libmachine: (bridge-999005)     <interface type='network'>
	I0403 19:33:03.012839   77599 main.go:141] libmachine: (bridge-999005)       <source network='default'/>
	I0403 19:33:03.012846   77599 main.go:141] libmachine: (bridge-999005)       <model type='virtio'/>
	I0403 19:33:03.012851   77599 main.go:141] libmachine: (bridge-999005)     </interface>
	I0403 19:33:03.012856   77599 main.go:141] libmachine: (bridge-999005)     <serial type='pty'>
	I0403 19:33:03.012863   77599 main.go:141] libmachine: (bridge-999005)       <target port='0'/>
	I0403 19:33:03.012888   77599 main.go:141] libmachine: (bridge-999005)     </serial>
	I0403 19:33:03.012900   77599 main.go:141] libmachine: (bridge-999005)     <console type='pty'>
	I0403 19:33:03.012911   77599 main.go:141] libmachine: (bridge-999005)       <target type='serial' port='0'/>
	I0403 19:33:03.012924   77599 main.go:141] libmachine: (bridge-999005)     </console>
	I0403 19:33:03.012929   77599 main.go:141] libmachine: (bridge-999005)     <rng model='virtio'>
	I0403 19:33:03.012935   77599 main.go:141] libmachine: (bridge-999005)       <backend model='random'>/dev/random</backend>
	I0403 19:33:03.012939   77599 main.go:141] libmachine: (bridge-999005)     </rng>
	I0403 19:33:03.012943   77599 main.go:141] libmachine: (bridge-999005)     
	I0403 19:33:03.012956   77599 main.go:141] libmachine: (bridge-999005)     
	I0403 19:33:03.012985   77599 main.go:141] libmachine: (bridge-999005)   </devices>
	I0403 19:33:03.013008   77599 main.go:141] libmachine: (bridge-999005) </domain>
	I0403 19:33:03.013039   77599 main.go:141] libmachine: (bridge-999005) 
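
The <domain type='kvm'> definition above is then handed to libvirt ("define libvirt domain using xml", then "starting domain"). The driver talks to libvirt programmatically, but the equivalent define-and-start step can be sketched by shelling out to virsh; the file name domain.xml below is a placeholder for the XML shown above.

    // sketch: define and start a libvirt domain from a saved XML file via virsh
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
    	}
    	return nil
    }

    func main() {
    	// domain.xml would hold the <domain type='kvm'> definition from the log
    	if err := run("virsh", "define", "domain.xml"); err != nil {
    		panic(err)
    	}
    	if err := run("virsh", "start", "bridge-999005"); err != nil {
    		panic(err)
    	}
    }
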
	I0403 19:33:03.017100   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:76:dd:cb in network default
	I0403 19:33:03.017850   77599 main.go:141] libmachine: (bridge-999005) starting domain...
	I0403 19:33:03.017875   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:03.017883   77599 main.go:141] libmachine: (bridge-999005) ensuring networks are active...
	I0403 19:33:03.018620   77599 main.go:141] libmachine: (bridge-999005) Ensuring network default is active
	I0403 19:33:03.018960   77599 main.go:141] libmachine: (bridge-999005) Ensuring network mk-bridge-999005 is active
	I0403 19:33:03.019610   77599 main.go:141] libmachine: (bridge-999005) getting domain XML...
	I0403 19:33:03.020474   77599 main.go:141] libmachine: (bridge-999005) creating domain...
	I0403 19:33:04.308608   77599 main.go:141] libmachine: (bridge-999005) waiting for IP...
	I0403 19:33:04.309508   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:04.310076   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:04.310237   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:04.310154   77621 retry.go:31] will retry after 304.11605ms: waiting for domain to come up
	I0403 19:33:04.615460   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:04.616072   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:04.616105   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:04.616027   77621 retry.go:31] will retry after 352.836416ms: waiting for domain to come up
	I0403 19:33:04.970906   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:04.971506   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:04.971580   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:04.971492   77621 retry.go:31] will retry after 384.292797ms: waiting for domain to come up
	I0403 19:33:05.357155   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:05.357783   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:05.357804   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:05.357746   77621 retry.go:31] will retry after 593.108014ms: waiting for domain to come up
	I0403 19:33:05.953253   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:05.953908   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:05.953955   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:05.953851   77621 retry.go:31] will retry after 715.405514ms: waiting for domain to come up
	I0403 19:33:06.671416   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:06.671869   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:06.671893   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:06.671849   77621 retry.go:31] will retry after 643.974958ms: waiting for domain to come up
	I0403 19:33:07.317681   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:07.318083   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:07.318111   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:07.318044   77621 retry.go:31] will retry after 830.836827ms: waiting for domain to come up
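
The "will retry after …" lines come from minikube's retry helper while it waits for the new guest to obtain a DHCP lease. A minimal stand-in for that pattern, exponential backoff with jitter rather than the real retry.go, looks like this:

    // sketch: retry with growing, jittered delays until a condition succeeds
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if attempt == maxAttempts {
    			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
    		}
    		// up to 50% jitter so concurrent waiters do not retry in lockstep
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return nil
    }

    func main() {
    	start := time.Now()
    	err := retryWithBackoff(10, 300*time.Millisecond, func() error {
    		// stand-in condition: pretend the domain comes up after ~2s
    		if time.Since(start) < 2*time.Second {
    			return errors.New("waiting for domain to come up")
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }
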
	I0403 19:33:04.278957   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:06.279442   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:05.632586   75819 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0403 19:33:05.638039   75819 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0403 19:33:05.638061   75819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0403 19:33:05.665102   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0403 19:33:06.148083   75819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0403 19:33:06.148182   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:06.148222   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-999005 minikube.k8s.io/updated_at=2025_04_03T19_33_06_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053 minikube.k8s.io/name=flannel-999005 minikube.k8s.io/primary=true
	I0403 19:33:06.328677   75819 ops.go:34] apiserver oom_adj: -16
	I0403 19:33:06.328804   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:06.829687   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:07.329161   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:07.829420   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:08.328906   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:08.828872   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:09.328884   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:09.829539   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:09.973416   75819 kubeadm.go:1113] duration metric: took 3.825298406s to wait for elevateKubeSystemPrivileges
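
The wait for elevateKubeSystemPrivileges above is simply a poll of `kubectl get sa default` until the default service account exists. A hedged sketch of that polling loop, reusing the binary path and kubeconfig seen in the log (timeout and interval are arbitrary):

    // sketch: poll for the "default" service account by shelling out to kubectl
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.32.2/kubectl" // path from the log
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account is present")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
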
	I0403 19:33:09.973463   75819 kubeadm.go:394] duration metric: took 14.815036163s to StartCluster
	I0403 19:33:09.973485   75819 settings.go:142] acquiring lock: {Name:mk92384ef10350a2ea8b1710e0b74c72a0214398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:09.973557   75819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:33:09.974857   75819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:09.975109   75819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0403 19:33:09.975113   75819 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:33:09.975194   75819 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0403 19:33:09.975291   75819 addons.go:69] Setting storage-provisioner=true in profile "flannel-999005"
	I0403 19:33:09.975313   75819 addons.go:238] Setting addon storage-provisioner=true in "flannel-999005"
	I0403 19:33:09.975344   75819 host.go:66] Checking if "flannel-999005" exists ...
	I0403 19:33:09.975339   75819 addons.go:69] Setting default-storageclass=true in profile "flannel-999005"
	I0403 19:33:09.975359   75819 config.go:182] Loaded profile config "flannel-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:09.975366   75819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-999005"
	I0403 19:33:09.975856   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:09.975875   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:09.975897   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:09.975907   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:09.976893   75819 out.go:177] * Verifying Kubernetes components...
	I0403 19:33:09.978410   75819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:09.995627   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44917
	I0403 19:33:09.995731   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46853
	I0403 19:33:09.996071   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:09.996180   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:09.996730   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:09.996748   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:09.996880   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:09.996905   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:09.997268   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:09.997310   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:09.997479   75819 main.go:141] libmachine: (flannel-999005) Calling .GetState
	I0403 19:33:09.997886   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:09.997934   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:10.001151   75819 addons.go:238] Setting addon default-storageclass=true in "flannel-999005"
	I0403 19:33:10.001199   75819 host.go:66] Checking if "flannel-999005" exists ...
	I0403 19:33:10.001557   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:10.001587   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:10.014104   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0403 19:33:10.014524   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:10.015098   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:10.015123   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:10.015470   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:10.015720   75819 main.go:141] libmachine: (flannel-999005) Calling .GetState
	I0403 19:33:10.017793   75819 main.go:141] libmachine: (flannel-999005) Calling .DriverName
	I0403 19:33:10.019942   75819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:33:10.021057   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0403 19:33:10.021158   75819 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:10.021177   75819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0403 19:33:10.021200   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHHostname
	I0403 19:33:10.021505   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:10.021986   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:10.022001   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:10.022291   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:10.022934   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:10.022978   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:10.024920   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.025474   75819 main.go:141] libmachine: (flannel-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:2c", ip: ""} in network mk-flannel-999005: {Iface:virbr4 ExpiryTime:2025-04-03 20:32:40 +0000 UTC Type:0 Mac:52:54:00:f9:eb:2c Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:flannel-999005 Clientid:01:52:54:00:f9:eb:2c}
	I0403 19:33:10.025494   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined IP address 192.168.72.34 and MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.025764   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHPort
	I0403 19:33:10.025935   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHKeyPath
	I0403 19:33:10.026060   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHUsername
	I0403 19:33:10.026152   75819 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/flannel-999005/id_rsa Username:docker}
	I0403 19:33:10.038389   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0403 19:33:10.038851   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:10.039336   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:10.039352   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:10.039758   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:10.039925   75819 main.go:141] libmachine: (flannel-999005) Calling .GetState
	I0403 19:33:10.041799   75819 main.go:141] libmachine: (flannel-999005) Calling .DriverName
	I0403 19:33:10.041979   75819 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:10.041991   75819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0403 19:33:10.042006   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHHostname
	I0403 19:33:10.045247   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.045722   75819 main.go:141] libmachine: (flannel-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:2c", ip: ""} in network mk-flannel-999005: {Iface:virbr4 ExpiryTime:2025-04-03 20:32:40 +0000 UTC Type:0 Mac:52:54:00:f9:eb:2c Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:flannel-999005 Clientid:01:52:54:00:f9:eb:2c}
	I0403 19:33:10.045806   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined IP address 192.168.72.34 and MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.046067   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHPort
	I0403 19:33:10.046226   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHKeyPath
	I0403 19:33:10.046320   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHUsername
	I0403 19:33:10.046486   75819 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/flannel-999005/id_rsa Username:docker}
	I0403 19:33:10.299013   75819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:10.317655   75819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:10.338018   75819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:33:10.338068   75819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
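
The long sed pipeline above edits the CoreDNS Corefile so that host.minikube.internal resolves to the host IP (here 192.168.72.1). The same insertion done line-by-line in Go instead of sed could look like the sketch below; the sample Corefile is abbreviated and illustrative only.

    // sketch: insert a hosts{} block before the "forward . /etc/resolv.conf" line
    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
    	hostsBlock := fmt.Sprintf(
    		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }",
    		hostIP)
    	var out []string
    	for _, line := range strings.Split(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out = append(out, hostsBlock)
    		}
    		out = append(out, line)
    	}
    	return strings.Join(out, "\n")
    }

    func main() {
    	corefile := `.:53 {
            errors
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }
            cache 30
    }`
    	fmt.Println(injectHostRecord(corefile, "192.168.72.1"))
    }
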
	I0403 19:33:10.862171   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862198   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.862252   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862291   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.862394   75819 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0403 19:33:10.862529   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.862607   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.862646   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.862682   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.862685   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.862709   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862717   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.862733   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.862743   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862756   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.863008   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.863020   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.863023   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.863228   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.863249   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.863680   75819 node_ready.go:35] waiting up to 15m0s for node "flannel-999005" to be "Ready" ...
	I0403 19:33:10.884450   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.884469   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.884725   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.884743   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.884768   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.886301   75819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0403 19:33:08.779259   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:10.778746   73990 pod_ready.go:93] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.778767   73990 pod_ready.go:82] duration metric: took 38.005486758s for pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.778775   73990 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.781400   73990 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-nthv6" not found
	I0403 19:33:10.781418   73990 pod_ready.go:82] duration metric: took 2.637243ms for pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace to be "Ready" ...
	E0403 19:33:10.781427   73990 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-nthv6" not found
	I0403 19:33:10.781433   73990 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.785172   73990 pod_ready.go:93] pod "etcd-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.785192   73990 pod_ready.go:82] duration metric: took 3.752808ms for pod "etcd-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.785207   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.788834   73990 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.788850   73990 pod_ready.go:82] duration metric: took 3.634986ms for pod "kube-apiserver-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.788861   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.793809   73990 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.793831   73990 pod_ready.go:82] duration metric: took 4.96233ms for pod "kube-controller-manager-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.793843   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-mzxck" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.977165   73990 pod_ready.go:93] pod "kube-proxy-mzxck" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.977191   73990 pod_ready.go:82] duration metric: took 183.339442ms for pod "kube-proxy-mzxck" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.977209   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:11.377090   73990 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:11.377122   73990 pod_ready.go:82] duration metric: took 399.903527ms for pod "kube-scheduler-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:11.377135   73990 pod_ready.go:39] duration metric: took 38.606454546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:11.377156   73990 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:33:11.377225   73990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:33:11.399542   73990 api_server.go:72] duration metric: took 38.946574315s to wait for apiserver process to appear ...
	I0403 19:33:11.399566   73990 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:33:11.399582   73990 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I0403 19:33:11.405734   73990 api_server.go:279] https://192.168.50.55:8443/healthz returned 200:
	ok
	I0403 19:33:11.406888   73990 api_server.go:141] control plane version: v1.32.2
	I0403 19:33:11.406910   73990 api_server.go:131] duration metric: took 7.338515ms to wait for apiserver health ...
	I0403 19:33:11.406918   73990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:33:11.582871   73990 system_pods.go:59] 7 kube-system pods found
	I0403 19:33:11.582912   73990 system_pods.go:61] "coredns-668d6bf9bc-2vwz9" [e83c5e99-c2f0-4228-bc84-d048bd7dba97] Running
	I0403 19:33:11.582920   73990 system_pods.go:61] "etcd-enable-default-cni-999005" [201225ab-9372-41eb-9c78-a52f125b0435] Running
	I0403 19:33:11.582927   73990 system_pods.go:61] "kube-apiserver-enable-default-cni-999005" [f3e9e4a1-810a-423a-8e08-35d311067324] Running
	I0403 19:33:11.582933   73990 system_pods.go:61] "kube-controller-manager-enable-default-cni-999005" [0b827b54-1569-4c8e-a582-ec0fd8e97cbc] Running
	I0403 19:33:11.582938   73990 system_pods.go:61] "kube-proxy-mzxck" [6c2874ed-9e8f-4222-87c3-fe23d207134c] Running
	I0403 19:33:11.582943   73990 system_pods.go:61] "kube-scheduler-enable-default-cni-999005" [e5d0c29c-06fc-4614-a107-51917236c60c] Running
	I0403 19:33:11.582949   73990 system_pods.go:61] "storage-provisioner" [6fab90c6-1563-4504-83d8-443f80cfb99c] Running
	I0403 19:33:11.582957   73990 system_pods.go:74] duration metric: took 176.033201ms to wait for pod list to return data ...
	I0403 19:33:11.582971   73990 default_sa.go:34] waiting for default service account to be created ...
	I0403 19:33:11.777789   73990 default_sa.go:45] found service account: "default"
	I0403 19:33:11.777811   73990 default_sa.go:55] duration metric: took 194.83101ms for default service account to be created ...
	I0403 19:33:11.777819   73990 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 19:33:11.977547   73990 system_pods.go:86] 7 kube-system pods found
	I0403 19:33:11.977583   73990 system_pods.go:89] "coredns-668d6bf9bc-2vwz9" [e83c5e99-c2f0-4228-bc84-d048bd7dba97] Running
	I0403 19:33:11.977592   73990 system_pods.go:89] "etcd-enable-default-cni-999005" [201225ab-9372-41eb-9c78-a52f125b0435] Running
	I0403 19:33:11.977599   73990 system_pods.go:89] "kube-apiserver-enable-default-cni-999005" [f3e9e4a1-810a-423a-8e08-35d311067324] Running
	I0403 19:33:11.977605   73990 system_pods.go:89] "kube-controller-manager-enable-default-cni-999005" [0b827b54-1569-4c8e-a582-ec0fd8e97cbc] Running
	I0403 19:33:11.977609   73990 system_pods.go:89] "kube-proxy-mzxck" [6c2874ed-9e8f-4222-87c3-fe23d207134c] Running
	I0403 19:33:11.977615   73990 system_pods.go:89] "kube-scheduler-enable-default-cni-999005" [e5d0c29c-06fc-4614-a107-51917236c60c] Running
	I0403 19:33:11.977620   73990 system_pods.go:89] "storage-provisioner" [6fab90c6-1563-4504-83d8-443f80cfb99c] Running
	I0403 19:33:11.977629   73990 system_pods.go:126] duration metric: took 199.803644ms to wait for k8s-apps to be running ...
	I0403 19:33:11.977643   73990 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 19:33:11.977695   73990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:33:11.993125   73990 system_svc.go:56] duration metric: took 15.471997ms WaitForService to wait for kubelet
	I0403 19:33:11.993158   73990 kubeadm.go:582] duration metric: took 39.540195871s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:11.993188   73990 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:33:12.176775   73990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:33:12.176803   73990 node_conditions.go:123] node cpu capacity is 2
	I0403 19:33:12.176814   73990 node_conditions.go:105] duration metric: took 183.620688ms to run NodePressure ...
	I0403 19:33:12.176824   73990 start.go:241] waiting for startup goroutines ...
	I0403 19:33:12.176832   73990 start.go:246] waiting for cluster config update ...
	I0403 19:33:12.176840   73990 start.go:255] writing updated cluster config ...
	I0403 19:33:12.177113   73990 ssh_runner.go:195] Run: rm -f paused
	I0403 19:33:12.225807   73990 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 19:33:12.228521   73990 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-999005" cluster and "default" namespace by default
	I0403 19:33:08.150408   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:08.151003   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:08.151075   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:08.150981   77621 retry.go:31] will retry after 1.152427701s: waiting for domain to come up
	I0403 19:33:09.305349   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:09.305908   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:09.305936   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:09.305883   77621 retry.go:31] will retry after 1.688969841s: waiting for domain to come up
	I0403 19:33:10.996123   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:10.996600   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:10.996677   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:10.996605   77621 retry.go:31] will retry after 1.643659414s: waiting for domain to come up
	I0403 19:33:10.887137   75819 addons.go:514] duration metric: took 911.958897ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0403 19:33:11.366941   75819 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-999005" context rescaled to 1 replicas
	I0403 19:33:12.867785   75819 node_ready.go:53] node "flannel-999005" has status "Ready":"False"
	I0403 19:33:13.333186   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:33:13.333452   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
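
The kubelet-check message above corresponds to kubeadm probing http://localhost:10248/healthz until the kubelet answers. A small poller expressing the same check (timeout values below are arbitrary):

    // sketch: poll the kubelet healthz endpoint until it returns 200 OK
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("http://localhost:10248/healthz")
    		if err != nil {
    			fmt.Println("kubelet not reachable yet:", err)
    		} else {
    			io.Copy(io.Discard, resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("kubelet healthy")
    				return
    			}
    			fmt.Println("kubelet returned", resp.Status)
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("timed out waiting for kubelet /healthz")
    }
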
	I0403 19:33:12.642410   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:12.642945   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:12.642979   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:12.642914   77621 retry.go:31] will retry after 2.077428265s: waiting for domain to come up
	I0403 19:33:14.722084   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:14.722568   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:14.722595   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:14.722556   77621 retry.go:31] will retry after 2.731919508s: waiting for domain to come up
	I0403 19:33:15.367030   75819 node_ready.go:53] node "flannel-999005" has status "Ready":"False"
	I0403 19:33:15.866309   75819 node_ready.go:49] node "flannel-999005" has status "Ready":"True"
	I0403 19:33:15.866339   75819 node_ready.go:38] duration metric: took 5.002629932s for node "flannel-999005" to be "Ready" ...
	I0403 19:33:15.866351   75819 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:15.878526   75819 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:17.884431   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:17.457578   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:17.458158   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:17.458186   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:17.458134   77621 retry.go:31] will retry after 2.937911428s: waiting for domain to come up
	I0403 19:33:20.397025   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:20.397485   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:20.397542   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:20.397476   77621 retry.go:31] will retry after 4.371309871s: waiting for domain to come up
	I0403 19:33:20.384008   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:22.384126   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:24.384580   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:24.771404   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.771836   77599 main.go:141] libmachine: (bridge-999005) found domain IP: 192.168.39.185
	I0403 19:33:24.771856   77599 main.go:141] libmachine: (bridge-999005) reserving static IP address...
	I0403 19:33:24.771868   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has current primary IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.772259   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find host DHCP lease matching {name: "bridge-999005", mac: "52:54:00:7a:d8:f7", ip: "192.168.39.185"} in network mk-bridge-999005
	I0403 19:33:24.855210   77599 main.go:141] libmachine: (bridge-999005) reserved static IP address 192.168.39.185 for domain bridge-999005
	I0403 19:33:24.855240   77599 main.go:141] libmachine: (bridge-999005) waiting for SSH...
	I0403 19:33:24.855250   77599 main.go:141] libmachine: (bridge-999005) DBG | Getting to WaitForSSH function...
	I0403 19:33:24.858175   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.858563   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:24.858592   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.858757   77599 main.go:141] libmachine: (bridge-999005) DBG | Using SSH client type: external
	I0403 19:33:24.858784   77599 main.go:141] libmachine: (bridge-999005) DBG | Using SSH private key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa (-rw-------)
	I0403 19:33:24.858847   77599 main.go:141] libmachine: (bridge-999005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0403 19:33:24.858868   77599 main.go:141] libmachine: (bridge-999005) DBG | About to run SSH command:
	I0403 19:33:24.858885   77599 main.go:141] libmachine: (bridge-999005) DBG | exit 0
	I0403 19:33:24.991462   77599 main.go:141] libmachine: (bridge-999005) DBG | SSH cmd err, output: <nil>: 
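
WaitForSSH above keeps running `exit 0` over ssh until the guest accepts the connection. A simpler readiness probe with the same intent, waiting for TCP port 22 on the reserved IP to accept connections, could be sketched as:

    // sketch: wait until the guest's SSH port accepts TCP connections
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("ssh on %s not reachable within %v", addr, timeout)
    }

    func main() {
    	if err := waitForSSH("192.168.39.185:22", 3*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh port is open")
    }
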
	I0403 19:33:24.991735   77599 main.go:141] libmachine: (bridge-999005) KVM machine creation complete
	I0403 19:33:24.992066   77599 main.go:141] libmachine: (bridge-999005) Calling .GetConfigRaw
	I0403 19:33:24.992629   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:24.992815   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:24.992938   77599 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0403 19:33:24.992952   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:24.994308   77599 main.go:141] libmachine: Detecting operating system of created instance...
	I0403 19:33:24.994326   77599 main.go:141] libmachine: Waiting for SSH to be available...
	I0403 19:33:24.994333   77599 main.go:141] libmachine: Getting to WaitForSSH function...
	I0403 19:33:24.994341   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:24.996876   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.997275   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:24.997304   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.997503   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:24.997680   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:24.997873   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:24.998025   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:24.998208   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:24.998408   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:24.998420   77599 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0403 19:33:25.106052   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:33:25.106078   77599 main.go:141] libmachine: Detecting the provisioner...
	I0403 19:33:25.106088   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.109437   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.109896   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.109925   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.110110   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.110294   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.110467   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.110624   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.110813   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.111134   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.111153   77599 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0403 19:33:25.216086   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0403 19:33:25.216142   77599 main.go:141] libmachine: found compatible host: buildroot
	I0403 19:33:25.216151   77599 main.go:141] libmachine: Provisioning with buildroot...
	I0403 19:33:25.216159   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:25.216374   77599 buildroot.go:166] provisioning hostname "bridge-999005"
	I0403 19:33:25.216401   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:25.216572   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.219422   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.219818   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.219856   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.219955   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.220119   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.220285   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.220404   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.220574   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.220845   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.220870   77599 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-999005 && echo "bridge-999005" | sudo tee /etc/hostname
	I0403 19:33:25.342189   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-999005
	
	I0403 19:33:25.342213   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.344813   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.345183   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.345211   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.345371   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.345582   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.345760   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.345918   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.346073   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.346281   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.346303   77599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-999005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-999005/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-999005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0403 19:33:25.458885   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
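
The shell fragment above pins 127.0.1.1 to the new hostname in /etc/hosts. Roughly the same logic in Go, to be run as root inside the guest; the matching semantics differ slightly from the grep/sed version:

    // sketch: make sure /etc/hosts maps 127.0.1.1 to the machine's hostname
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")
    	for _, line := range lines {
    		f := strings.Fields(line)
    		if len(f) >= 2 && f[len(f)-1] == hostname {
    			return nil // hostname already mapped, nothing to do
    		}
    	}
    	replaced := false
    	for i, line := range lines {
    		if strings.HasPrefix(line, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname
    			replaced = true
    			break
    		}
    	}
    	if !replaced {
    		lines = append(lines, "127.0.1.1 "+hostname)
    	}
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "bridge-999005"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
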
	I0403 19:33:25.458914   77599 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20591-14371/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-14371/.minikube}
	I0403 19:33:25.458936   77599 buildroot.go:174] setting up certificates
	I0403 19:33:25.458946   77599 provision.go:84] configureAuth start
	I0403 19:33:25.458954   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:25.459254   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:25.461901   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.462300   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.462326   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.462424   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.464888   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.465249   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.465284   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.465492   77599 provision.go:143] copyHostCerts
	I0403 19:33:25.465551   77599 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem, removing ...
	I0403 19:33:25.465580   77599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem
	I0403 19:33:25.465662   77599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem (1082 bytes)
	I0403 19:33:25.465795   77599 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem, removing ...
	I0403 19:33:25.465805   77599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem
	I0403 19:33:25.465835   77599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem (1123 bytes)
	I0403 19:33:25.465951   77599 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem, removing ...
	I0403 19:33:25.465960   77599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem
	I0403 19:33:25.465984   77599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem (1675 bytes)
	I0403 19:33:25.466044   77599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem org=jenkins.bridge-999005 san=[127.0.0.1 192.168.39.185 bridge-999005 localhost minikube]
	I0403 19:33:25.774649   77599 provision.go:177] copyRemoteCerts
	I0403 19:33:25.774710   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0403 19:33:25.774731   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.777197   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.777576   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.777599   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.777795   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.777962   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.778108   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.778212   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
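
Each `sshutil.go:53] new ssh client` line corresponds to opening an SSH connection to the VM with the machine key and the `docker` user. A minimal sketch with golang.org/x/crypto/ssh, under the assumption that the key path and command shown here are only examples:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address mirror the log; adjust for your own profile.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.39.185:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```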
	I0403 19:33:25.860653   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0403 19:33:25.882849   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0403 19:33:25.904559   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0403 19:33:25.926431   77599 provision.go:87] duration metric: took 467.475481ms to configureAuth
	I0403 19:33:25.926455   77599 buildroot.go:189] setting minikube options for container-runtime
	I0403 19:33:25.926650   77599 config.go:182] Loaded profile config "bridge-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:25.926725   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.929371   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.929809   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.929838   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.930028   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.930213   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.930335   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.930463   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.930620   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.930837   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.930859   77599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0403 19:33:26.149645   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0403 19:33:26.149674   77599 main.go:141] libmachine: Checking connection to Docker...
	I0403 19:33:26.149683   77599 main.go:141] libmachine: (bridge-999005) Calling .GetURL
	I0403 19:33:26.151048   77599 main.go:141] libmachine: (bridge-999005) DBG | using libvirt version 6000000
	I0403 19:33:26.153703   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.154090   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.154119   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.154326   77599 main.go:141] libmachine: Docker is up and running!
	I0403 19:33:26.154341   77599 main.go:141] libmachine: Reticulating splines...
	I0403 19:33:26.154349   77599 client.go:171] duration metric: took 23.685966388s to LocalClient.Create
	I0403 19:33:26.154377   77599 start.go:167] duration metric: took 23.686038349s to libmachine.API.Create "bridge-999005"
	I0403 19:33:26.154389   77599 start.go:293] postStartSetup for "bridge-999005" (driver="kvm2")
	I0403 19:33:26.154402   77599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0403 19:33:26.154427   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.154672   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0403 19:33:26.154704   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.156992   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.157408   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.157429   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.157561   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.157730   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.157866   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.157997   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:26.241074   77599 ssh_runner.go:195] Run: cat /etc/os-release
	I0403 19:33:26.245234   77599 info.go:137] Remote host: Buildroot 2023.02.9
	I0403 19:33:26.245256   77599 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/addons for local assets ...
	I0403 19:33:26.245308   77599 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/files for local assets ...
	I0403 19:33:26.245384   77599 filesync.go:149] local asset: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem -> 215522.pem in /etc/ssl/certs
	I0403 19:33:26.245467   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0403 19:33:26.255926   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:33:26.280402   77599 start.go:296] duration metric: took 125.998084ms for postStartSetup
	I0403 19:33:26.280453   77599 main.go:141] libmachine: (bridge-999005) Calling .GetConfigRaw
	I0403 19:33:26.281006   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:26.283814   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.284161   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.284198   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.284452   77599 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/config.json ...
	I0403 19:33:26.284648   77599 start.go:128] duration metric: took 23.834461991s to createHost
	I0403 19:33:26.284669   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.286766   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.287110   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.287143   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.287319   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.287485   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.287642   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.287742   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.287917   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:26.288126   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:26.288141   77599 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0403 19:33:26.391168   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743708806.364931884
	
	I0403 19:33:26.391188   77599 fix.go:216] guest clock: 1743708806.364931884
	I0403 19:33:26.391194   77599 fix.go:229] Guest: 2025-04-03 19:33:26.364931884 +0000 UTC Remote: 2025-04-03 19:33:26.284659648 +0000 UTC m=+23.944823978 (delta=80.272236ms)
	I0403 19:33:26.391222   77599 fix.go:200] guest clock delta is within tolerance: 80.272236ms
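
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only act when the delta exceeds a tolerance. A small sketch of that comparison; the tolerance value here is an assumption, not minikube's actual threshold:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output and returns the absolute
// difference from the supplied host time.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	f, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	guest := time.Unix(sec, nsec)
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Guest timestamp taken from the log line above.
	d, err := guestClockDelta("1743708806.364931884", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance for illustration
	fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d < tolerance)
}
```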
	I0403 19:33:26.391226   77599 start.go:83] releasing machines lock for "bridge-999005", held for 23.941120784s
	I0403 19:33:26.391243   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.391495   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:26.393938   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.394286   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.394329   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.394501   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.394952   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.395143   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.395256   77599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0403 19:33:26.395299   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.395400   77599 ssh_runner.go:195] Run: cat /version.json
	I0403 19:33:26.395433   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.397923   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.398466   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.398524   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.398551   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.399177   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.399375   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.399399   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.399434   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.399582   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.399687   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.399711   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:26.399801   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.399953   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.400091   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:26.511483   77599 ssh_runner.go:195] Run: systemctl --version
	I0403 19:33:26.517463   77599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0403 19:33:26.670834   77599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0403 19:33:26.676690   77599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0403 19:33:26.676757   77599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0403 19:33:26.693357   77599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0403 19:33:26.693383   77599 start.go:495] detecting cgroup driver to use...
	I0403 19:33:26.693442   77599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0403 19:33:26.711536   77599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0403 19:33:26.727184   77599 docker.go:217] disabling cri-docker service (if available) ...
	I0403 19:33:26.727244   77599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0403 19:33:26.744189   77599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0403 19:33:26.758114   77599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0403 19:33:26.874699   77599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0403 19:33:27.029147   77599 docker.go:233] disabling docker service ...
	I0403 19:33:27.029214   77599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0403 19:33:27.042778   77599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0403 19:33:27.056884   77599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0403 19:33:27.165758   77599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0403 19:33:27.283993   77599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0403 19:33:27.297495   77599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0403 19:33:27.315338   77599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0403 19:33:27.315392   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.325005   77599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0403 19:33:27.325056   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.334776   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.345113   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.355007   77599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0403 19:33:27.364955   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.374894   77599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.391740   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.401813   77599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0403 19:33:27.411004   77599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0403 19:33:27.411051   77599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0403 19:33:27.423701   77599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0403 19:33:27.432566   77599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:27.549830   77599 ssh_runner.go:195] Run: sudo systemctl restart crio
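
The run of `sed` commands above edits /etc/crio/crio.conf.d/02-crio.conf in place (pin the pause image, force the cgroupfs cgroup manager, put conmon in the pod cgroup, open low ports via default_sysctls) before restarting CRI-O. A rough in-process equivalent using Go regexps; minikube itself shells out to sed, so this is a sketch of the effect, not its code:

```go
package main

import (
	"fmt"
	"regexp"
)

// applyCrioEdits mirrors the main sed edits from the log on an in-memory
// copy of 02-crio.conf. File I/O and the default_sysctls insertion are
// omitted for brevity.
func applyCrioEdits(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	return conf
}

func main() {
	in := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	fmt.Print(applyCrioEdits(in))
}
```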
	I0403 19:33:27.639431   77599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0403 19:33:27.639494   77599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0403 19:33:27.644011   77599 start.go:563] Will wait 60s for crictl version
	I0403 19:33:27.644059   77599 ssh_runner.go:195] Run: which crictl
	I0403 19:33:27.647488   77599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0403 19:33:27.684002   77599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0403 19:33:27.684079   77599 ssh_runner.go:195] Run: crio --version
	I0403 19:33:27.714223   77599 ssh_runner.go:195] Run: crio --version
	I0403 19:33:27.741585   77599 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0403 19:33:26.884187   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:28.885446   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:30.384628   75819 pod_ready.go:93] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.384654   75819 pod_ready.go:82] duration metric: took 14.506093364s for pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.384666   75819 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.391041   75819 pod_ready.go:93] pod "etcd-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.391069   75819 pod_ready.go:82] duration metric: took 6.395099ms for pod "etcd-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.391082   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.396442   75819 pod_ready.go:93] pod "kube-apiserver-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.396465   75819 pod_ready.go:82] duration metric: took 5.374496ms for pod "kube-apiserver-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.396475   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.403106   75819 pod_ready.go:93] pod "kube-controller-manager-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.403125   75819 pod_ready.go:82] duration metric: took 6.641201ms for pod "kube-controller-manager-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.403137   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5wp5x" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.407151   75819 pod_ready.go:93] pod "kube-proxy-5wp5x" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.407185   75819 pod_ready.go:82] duration metric: took 4.039313ms for pod "kube-proxy-5wp5x" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.407197   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.782264   75819 pod_ready.go:93] pod "kube-scheduler-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.782294   75819 pod_ready.go:82] duration metric: took 375.086145ms for pod "kube-scheduler-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.782309   75819 pod_ready.go:39] duration metric: took 14.915929273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:30.782329   75819 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:33:30.782393   75819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:33:30.798036   75819 api_server.go:72] duration metric: took 20.822884639s to wait for apiserver process to appear ...
	I0403 19:33:30.798067   75819 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:33:30.798089   75819 api_server.go:253] Checking apiserver healthz at https://192.168.72.34:8443/healthz ...
	I0403 19:33:30.803997   75819 api_server.go:279] https://192.168.72.34:8443/healthz returned 200:
	ok
	I0403 19:33:30.805211   75819 api_server.go:141] control plane version: v1.32.2
	I0403 19:33:30.805239   75819 api_server.go:131] duration metric: took 7.159207ms to wait for apiserver health ...
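
api_server.go treats the control plane as healthy once `https://<ip>:8443/healthz` returns 200 with body `ok`, as logged above. A minimal probe sketch; certificate verification is skipped here only because this sketch does not load minikube's CA, which a real client should do instead:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by minikube's CA; load that CA
			// rather than skipping verification outside of a throwaway test.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.72.34:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q healthy=%v\n",
		resp.StatusCode, body, resp.StatusCode == 200 && string(body) == "ok")
}
```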
	I0403 19:33:30.805248   75819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:33:30.983942   75819 system_pods.go:59] 7 kube-system pods found
	I0403 19:33:30.984001   75819 system_pods.go:61] "coredns-668d6bf9bc-qxf6t" [c2f4058a-3dd8-4489-8fbc-05a2270375e4] Running
	I0403 19:33:30.984009   75819 system_pods.go:61] "etcd-flannel-999005" [67a1995c-eb31-4f43-85dc-abe52818818b] Running
	I0403 19:33:30.984015   75819 system_pods.go:61] "kube-apiserver-flannel-999005" [3b6f77fb-86b6-4f3a-91d7-ae7b58f084f8] Running
	I0403 19:33:30.984021   75819 system_pods.go:61] "kube-controller-manager-flannel-999005" [344cd255-fe98-41ef-818b-e79c931c72c3] Running
	I0403 19:33:30.984026   75819 system_pods.go:61] "kube-proxy-5wp5x" [e3f733e6-641a-4c29-94e7-a11cca7d4707] Running
	I0403 19:33:30.984035   75819 system_pods.go:61] "kube-scheduler-flannel-999005" [8a6014ba-ea10-4d6e-8e23-708cabaaeac9] Running
	I0403 19:33:30.984040   75819 system_pods.go:61] "storage-provisioner" [6785981d-1626-4f5a-ab63-000a23fcdce1] Running
	I0403 19:33:30.984048   75819 system_pods.go:74] duration metric: took 178.79249ms to wait for pod list to return data ...
	I0403 19:33:30.984056   75819 default_sa.go:34] waiting for default service account to be created ...
	I0403 19:33:31.182732   75819 default_sa.go:45] found service account: "default"
	I0403 19:33:31.182760   75819 default_sa.go:55] duration metric: took 198.696832ms for default service account to be created ...
	I0403 19:33:31.182774   75819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 19:33:31.385033   75819 system_pods.go:86] 7 kube-system pods found
	I0403 19:33:31.385057   75819 system_pods.go:89] "coredns-668d6bf9bc-qxf6t" [c2f4058a-3dd8-4489-8fbc-05a2270375e4] Running
	I0403 19:33:31.385062   75819 system_pods.go:89] "etcd-flannel-999005" [67a1995c-eb31-4f43-85dc-abe52818818b] Running
	I0403 19:33:31.385066   75819 system_pods.go:89] "kube-apiserver-flannel-999005" [3b6f77fb-86b6-4f3a-91d7-ae7b58f084f8] Running
	I0403 19:33:31.385069   75819 system_pods.go:89] "kube-controller-manager-flannel-999005" [344cd255-fe98-41ef-818b-e79c931c72c3] Running
	I0403 19:33:31.385073   75819 system_pods.go:89] "kube-proxy-5wp5x" [e3f733e6-641a-4c29-94e7-a11cca7d4707] Running
	I0403 19:33:31.385076   75819 system_pods.go:89] "kube-scheduler-flannel-999005" [8a6014ba-ea10-4d6e-8e23-708cabaaeac9] Running
	I0403 19:33:31.385079   75819 system_pods.go:89] "storage-provisioner" [6785981d-1626-4f5a-ab63-000a23fcdce1] Running
	I0403 19:33:31.385085   75819 system_pods.go:126] duration metric: took 202.306181ms to wait for k8s-apps to be running ...
	I0403 19:33:31.385091   75819 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 19:33:31.385126   75819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:33:31.404702   75819 system_svc.go:56] duration metric: took 19.600688ms WaitForService to wait for kubelet
	I0403 19:33:31.404730   75819 kubeadm.go:582] duration metric: took 21.4295849s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:31.404750   75819 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:33:31.582762   75819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:33:31.582801   75819 node_conditions.go:123] node cpu capacity is 2
	I0403 19:33:31.582836   75819 node_conditions.go:105] duration metric: took 178.062088ms to run NodePressure ...
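
The NodePressure check above reads node capacity (17734596Ki ephemeral storage, 2 CPUs) from the API. An equivalent read with client-go, assuming a kubeconfig path for the profile being tested:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; point it at the profile you want.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity["cpu"]
		eph := n.Status.Capacity["ephemeral-storage"]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
```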
	I0403 19:33:31.582854   75819 start.go:241] waiting for startup goroutines ...
	I0403 19:33:31.582869   75819 start.go:246] waiting for cluster config update ...
	I0403 19:33:31.582887   75819 start.go:255] writing updated cluster config ...
	I0403 19:33:31.583197   75819 ssh_runner.go:195] Run: rm -f paused
	I0403 19:33:31.635619   75819 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 19:33:31.638459   75819 out.go:177] * Done! kubectl is now configured to use "flannel-999005" cluster and "default" namespace by default
	I0403 19:33:27.742812   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:27.745608   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:27.745919   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:27.745942   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:27.746168   77599 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0403 19:33:27.751053   77599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 19:33:27.764022   77599 kubeadm.go:883] updating cluster {Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0403 19:33:27.764144   77599 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 19:33:27.764216   77599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:33:27.796330   77599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0403 19:33:27.796388   77599 ssh_runner.go:195] Run: which lz4
	I0403 19:33:27.800001   77599 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0403 19:33:27.803844   77599 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0403 19:33:27.803872   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0403 19:33:29.013823   77599 crio.go:462] duration metric: took 1.21384319s to copy over tarball
	I0403 19:33:29.013908   77599 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0403 19:33:31.265429   77599 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25149294s)
	I0403 19:33:31.265456   77599 crio.go:469] duration metric: took 2.251598795s to extract the tarball
	I0403 19:33:31.265466   77599 ssh_runner.go:146] rm: /preloaded.tar.lz4
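
The preload path scps a ~400 MB tarball of container images to the VM and unpacks it into /var, reporting the elapsed time as a `duration metric`. A sketch of timing the same tar invocation (in minikube this runs inside the VM over SSH; here it assumes /preloaded.tar.lz4 exists locally):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same tar flags as in the log: preserve security xattrs and decompress with lz4.
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}
```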
	I0403 19:33:31.311717   77599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:33:31.357972   77599 crio.go:514] all images are preloaded for cri-o runtime.
	I0403 19:33:31.357990   77599 cache_images.go:84] Images are preloaded, skipping loading
	I0403 19:33:31.357996   77599 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.32.2 crio true true} ...
	I0403 19:33:31.358074   77599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-999005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0403 19:33:31.358151   77599 ssh_runner.go:195] Run: crio config
	I0403 19:33:31.405178   77599 cni.go:84] Creating CNI manager for "bridge"
	I0403 19:33:31.405201   77599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0403 19:33:31.405225   77599 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-999005 NodeName:bridge-999005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0403 19:33:31.405365   77599 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-999005"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0403 19:33:31.405440   77599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0403 19:33:31.414987   77599 binaries.go:44] Found k8s binaries, skipping transfer
	I0403 19:33:31.415051   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0403 19:33:31.423910   77599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0403 19:33:31.440728   77599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0403 19:33:31.457926   77599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
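
The kubeadm config printed above is rendered with the node name, IP, and Kubernetes version filled in, then copied to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down text/template sketch of that rendering step; the field names and the trimmed template are illustrative, not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	data := struct {
		NodeName, NodeIP, KubernetesVersion string
		APIServerPort                       int
	}{"bridge-999005", "192.168.39.185", "v1.32.2", 8443}

	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```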
	I0403 19:33:31.473099   77599 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0403 19:33:31.476839   77599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0403 19:33:31.489178   77599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:31.648751   77599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:33:31.669990   77599 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005 for IP: 192.168.39.185
	I0403 19:33:31.670005   77599 certs.go:194] generating shared ca certs ...
	I0403 19:33:31.670019   77599 certs.go:226] acquiring lock for ca certs: {Name:mk34f09724355b828d31eb1ee97771fd3b4645c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:31.670173   77599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key
	I0403 19:33:31.670222   77599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key
	I0403 19:33:31.670233   77599 certs.go:256] generating profile certs ...
	I0403 19:33:31.670294   77599 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.key
	I0403 19:33:31.670311   77599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt with IP's: []
	I0403 19:33:31.786831   77599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt ...
	I0403 19:33:31.786859   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: {Name:mkf649d0c8846125bd9d91dd0614dd3edfd43b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:31.787055   77599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.key ...
	I0403 19:33:31.787070   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.key: {Name:mkea47be4f98d7242ecb2031208f90bf3ddcfbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:31.787180   77599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7
	I0403 19:33:31.787196   77599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185]
	I0403 19:33:32.247425   77599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7 ...
	I0403 19:33:32.247474   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7: {Name:mkb6bfa4c7f67a4ee70ff58016a1c305b43c986d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.247650   77599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7 ...
	I0403 19:33:32.247672   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7: {Name:mk32e06deb5b5d3858815a6cc3fd3d129517ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.247754   77599 certs.go:381] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7 -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt
	I0403 19:33:32.247827   77599 certs.go:385] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7 -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key
	I0403 19:33:32.247877   77599 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key
	I0403 19:33:32.247891   77599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt with IP's: []
	I0403 19:33:32.541993   77599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt ...
	I0403 19:33:32.542032   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt: {Name:mka4e60c00e3edab5ba1c58c999a89035bcada4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.542254   77599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key ...
	I0403 19:33:32.542274   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key: {Name:mkde5f934453d4d4ad6f3ee32b9cd909c8295965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
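
The profile certs above are generated with explicit IP SANs (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185). A minimal crypto/x509 sketch of creating such a cert; it is self-signed here for brevity, whereas minikube signs with its own CA key:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// The IP SAN list mirrors the log; subject fields are placeholders.
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"example"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.185"),
		},
	}
	// Self-signed (template == parent); a CA-signed cert would pass the CA
	// certificate and key here instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```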
	I0403 19:33:32.542504   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem (1338 bytes)
	W0403 19:33:32.542553   77599 certs.go:480] ignoring /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552_empty.pem, impossibly tiny 0 bytes
	I0403 19:33:32.542568   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem (1675 bytes)
	I0403 19:33:32.542598   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem (1082 bytes)
	I0403 19:33:32.542631   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem (1123 bytes)
	I0403 19:33:32.542662   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem (1675 bytes)
	I0403 19:33:32.542713   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:33:32.543437   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0403 19:33:32.573758   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0403 19:33:32.607840   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0403 19:33:32.640302   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0403 19:33:32.664859   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0403 19:33:32.688081   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0403 19:33:32.713262   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0403 19:33:32.738235   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0403 19:33:32.760858   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /usr/share/ca-certificates/215522.pem (1708 bytes)
	I0403 19:33:32.785677   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0403 19:33:32.812357   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem --> /usr/share/ca-certificates/21552.pem (1338 bytes)
	I0403 19:33:32.837494   77599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0403 19:33:32.855867   77599 ssh_runner.go:195] Run: openssl version
	I0403 19:33:32.861693   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215522.pem && ln -fs /usr/share/ca-certificates/215522.pem /etc/ssl/certs/215522.pem"
	I0403 19:33:32.873958   77599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215522.pem
	I0403 19:33:32.878670   77599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  3 18:20 /usr/share/ca-certificates/215522.pem
	I0403 19:33:32.878720   77599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215522.pem
	I0403 19:33:32.884412   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215522.pem /etc/ssl/certs/3ec20f2e.0"
	I0403 19:33:32.895046   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0403 19:33:32.907127   77599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:33:32.911596   77599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  3 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:33:32.911653   77599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:33:32.917387   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0403 19:33:32.929021   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21552.pem && ln -fs /usr/share/ca-certificates/21552.pem /etc/ssl/certs/21552.pem"
	I0403 19:33:32.939538   77599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21552.pem
	I0403 19:33:32.943923   77599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  3 18:20 /usr/share/ca-certificates/21552.pem
	I0403 19:33:32.944004   77599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21552.pem
	I0403 19:33:32.949423   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21552.pem /etc/ssl/certs/51391683.0"
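
Each `openssl x509 -hash -noout` / `ln -fs .../<hash>.0` pair above installs a cert into the system trust directory under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). A sketch of those two steps from Go, shelling out to openssl as the log does; it needs root to write /etc/ssl/certs:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installTrusted links certPath into /etc/ssl/certs under its OpenSSL
// subject hash, which is how the log makes minikubeCA trusted on the VM.
func installTrusted(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any existing link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := installTrusted("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
```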
	I0403 19:33:32.960722   77599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0403 19:33:32.965345   77599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0403 19:33:32.965401   77599 kubeadm.go:392] StartCluster: {Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:33:32.965483   77599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0403 19:33:32.965542   77599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 19:33:33.006784   77599 cri.go:89] found id: ""
	I0403 19:33:33.006867   77599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0403 19:33:33.020183   77599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0403 19:33:33.032692   77599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:33:33.044354   77599 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:33:33.044374   77599 kubeadm.go:157] found existing configuration files:
	
	I0403 19:33:33.044424   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:33:33.054955   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:33:33.055012   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:33:33.065535   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:33:33.075309   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:33:33.075362   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:33:33.084429   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:33:33.094442   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:33:33.094494   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:33:33.104926   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:33:33.113846   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:33:33.113901   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:33:33.123447   77599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0403 19:33:33.175768   77599 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0403 19:33:33.175858   77599 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 19:33:33.283828   77599 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 19:33:33.283918   77599 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 19:33:33.284054   77599 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0403 19:33:33.292775   77599 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 19:33:33.394356   77599 out.go:235]   - Generating certificates and keys ...
	I0403 19:33:33.394483   77599 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 19:33:33.394561   77599 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 19:33:33.485736   77599 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0403 19:33:33.658670   77599 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0403 19:33:33.890328   77599 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0403 19:33:34.033068   77599 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0403 19:33:34.206188   77599 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0403 19:33:34.206439   77599 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-999005 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0403 19:33:34.284743   77599 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0403 19:33:34.285173   77599 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-999005 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0403 19:33:34.392026   77599 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0403 19:33:34.810433   77599 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0403 19:33:35.031395   77599 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0403 19:33:35.031595   77599 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 19:33:35.090736   77599 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 19:33:35.311577   77599 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0403 19:33:35.707554   77599 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 19:33:35.820376   77599 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 19:33:35.956268   77599 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 19:33:35.956874   77599 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 19:33:35.959282   77599 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 19:33:35.961148   77599 out.go:235]   - Booting up control plane ...
	I0403 19:33:35.961289   77599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 19:33:35.961399   77599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 19:33:35.961510   77599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 19:33:35.976979   77599 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 19:33:35.984810   77599 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 19:33:35.984907   77599 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 19:33:36.127595   77599 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0403 19:33:36.127753   77599 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0403 19:33:37.628536   77599 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502119988s
	I0403 19:33:37.628648   77599 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0403 19:33:42.629743   77599 kubeadm.go:310] [api-check] The API server is healthy after 5.001769611s
	I0403 19:33:42.644211   77599 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0403 19:33:42.657726   77599 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0403 19:33:42.676447   77599 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0403 19:33:42.676702   77599 kubeadm.go:310] [mark-control-plane] Marking the node bridge-999005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0403 19:33:42.687306   77599 kubeadm.go:310] [bootstrap-token] Using token: fq7src.0us7ohixvgrd79kz
	I0403 19:33:42.688455   77599 out.go:235]   - Configuring RBAC rules ...
	I0403 19:33:42.688598   77599 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0403 19:33:42.699921   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0403 19:33:42.705060   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0403 19:33:42.708286   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0403 19:33:42.711842   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0403 19:33:42.714732   77599 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0403 19:33:43.034566   77599 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0403 19:33:43.461914   77599 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0403 19:33:44.038634   77599 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0403 19:33:44.038659   77599 kubeadm.go:310] 
	I0403 19:33:44.038745   77599 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0403 19:33:44.038755   77599 kubeadm.go:310] 
	I0403 19:33:44.038871   77599 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0403 19:33:44.038881   77599 kubeadm.go:310] 
	I0403 19:33:44.038916   77599 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0403 19:33:44.039008   77599 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0403 19:33:44.039100   77599 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0403 19:33:44.039134   77599 kubeadm.go:310] 
	I0403 19:33:44.039222   77599 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0403 19:33:44.039235   77599 kubeadm.go:310] 
	I0403 19:33:44.039297   77599 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0403 19:33:44.039307   77599 kubeadm.go:310] 
	I0403 19:33:44.039378   77599 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0403 19:33:44.039475   77599 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0403 19:33:44.039566   77599 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0403 19:33:44.039577   77599 kubeadm.go:310] 
	I0403 19:33:44.039690   77599 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0403 19:33:44.039800   77599 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0403 19:33:44.039812   77599 kubeadm.go:310] 
	I0403 19:33:44.039932   77599 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fq7src.0us7ohixvgrd79kz \
	I0403 19:33:44.040071   77599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 \
	I0403 19:33:44.040122   77599 kubeadm.go:310] 	--control-plane 
	I0403 19:33:44.040136   77599 kubeadm.go:310] 
	I0403 19:33:44.040260   77599 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0403 19:33:44.040279   77599 kubeadm.go:310] 
	I0403 19:33:44.040382   77599 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fq7src.0us7ohixvgrd79kz \
	I0403 19:33:44.040526   77599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 
	I0403 19:33:44.042310   77599 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:33:44.042339   77599 cni.go:84] Creating CNI manager for "bridge"
	I0403 19:33:44.044752   77599 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0403 19:33:44.046058   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0403 19:33:44.056620   77599 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0403 19:33:44.072775   77599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0403 19:33:44.072865   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:44.072907   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-999005 minikube.k8s.io/updated_at=2025_04_03T19_33_44_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053 minikube.k8s.io/name=bridge-999005 minikube.k8s.io/primary=true
	I0403 19:33:44.091241   77599 ops.go:34] apiserver oom_adj: -16
	I0403 19:33:44.213492   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:44.713802   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:45.214487   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:45.714490   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:46.213775   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:46.714137   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:47.214234   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:47.714484   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:48.214082   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:48.316673   77599 kubeadm.go:1113] duration metric: took 4.243867048s to wait for elevateKubeSystemPrivileges
	I0403 19:33:48.316706   77599 kubeadm.go:394] duration metric: took 15.351310395s to StartCluster
	I0403 19:33:48.316727   77599 settings.go:142] acquiring lock: {Name:mk92384ef10350a2ea8b1710e0b74c72a0214398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:48.316801   77599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:33:48.317861   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:48.318088   77599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0403 19:33:48.318097   77599 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:33:48.318175   77599 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0403 19:33:48.318244   77599 addons.go:69] Setting storage-provisioner=true in profile "bridge-999005"
	I0403 19:33:48.318265   77599 addons.go:238] Setting addon storage-provisioner=true in "bridge-999005"
	I0403 19:33:48.318297   77599 host.go:66] Checking if "bridge-999005" exists ...
	I0403 19:33:48.318313   77599 addons.go:69] Setting default-storageclass=true in profile "bridge-999005"
	I0403 19:33:48.318298   77599 config.go:182] Loaded profile config "bridge-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:48.318356   77599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-999005"
	I0403 19:33:48.318770   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.318796   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.318776   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.318879   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.319539   77599 out.go:177] * Verifying Kubernetes components...
	I0403 19:33:48.321103   77599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:48.336019   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0403 19:33:48.336019   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0403 19:33:48.336447   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.336540   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.336979   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.336996   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.337098   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.337121   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.337332   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.337465   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.337538   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:48.338013   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.338065   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.340961   77599 addons.go:238] Setting addon default-storageclass=true in "bridge-999005"
	I0403 19:33:48.340999   77599 host.go:66] Checking if "bridge-999005" exists ...
	I0403 19:33:48.341322   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.341365   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.355048   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39811
	I0403 19:33:48.355610   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.356196   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.356226   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.356592   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.356792   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:48.356827   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0403 19:33:48.357305   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.357816   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.357835   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.358248   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.358722   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:48.358870   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.358911   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.360538   77599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:33:48.361702   77599 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:48.361718   77599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0403 19:33:48.361733   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:48.365062   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.365531   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:48.365554   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.365701   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:48.365870   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:48.366032   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:48.366166   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:48.374675   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I0403 19:33:48.375202   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.375806   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.375835   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.376141   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.376322   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:48.378097   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:48.378291   77599 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:48.378302   77599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0403 19:33:48.378314   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:48.381118   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.381622   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:48.381645   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.381846   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:48.382025   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:48.382166   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:48.382292   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:48.586906   77599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:33:48.586933   77599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0403 19:33:48.720936   77599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:48.723342   77599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:49.076492   77599 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0403 19:33:49.076540   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.076560   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.076816   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.076831   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.076840   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.076848   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.077211   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.077226   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.077254   77599 main.go:141] libmachine: (bridge-999005) DBG | Closing plugin on server side
	I0403 19:33:49.077567   77599 node_ready.go:35] waiting up to 15m0s for node "bridge-999005" to be "Ready" ...
	I0403 19:33:49.095818   77599 node_ready.go:49] node "bridge-999005" has status "Ready":"True"
	I0403 19:33:49.095840   77599 node_ready.go:38] duration metric: took 18.234764ms for node "bridge-999005" to be "Ready" ...
	I0403 19:33:49.095851   77599 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:49.103291   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.103309   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.103560   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.103582   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.103585   77599 main.go:141] libmachine: (bridge-999005) DBG | Closing plugin on server side
	I0403 19:33:49.106640   77599 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:49.381709   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.381734   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.382012   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.382029   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.382037   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.382044   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.382304   77599 main.go:141] libmachine: (bridge-999005) DBG | Closing plugin on server side
	I0403 19:33:49.382308   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.382332   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.383772   77599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0403 19:33:49.384901   77599 addons.go:514] duration metric: took 1.066742014s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0403 19:33:49.580077   77599 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-999005" context rescaled to 1 replicas
	I0403 19:33:51.111757   77599 pod_ready.go:103] pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:52.112437   77599 pod_ready.go:93] pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:52.112460   77599 pod_ready.go:82] duration metric: took 3.005799611s for pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:52.112469   77599 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:52.114218   77599 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-s979x" not found
	I0403 19:33:52.114244   77599 pod_ready.go:82] duration metric: took 1.768553ms for pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace to be "Ready" ...
	E0403 19:33:52.114257   77599 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-s979x" not found
	I0403 19:33:52.114267   77599 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:53.332014   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:33:53.332308   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:33:53.332328   66718 kubeadm.go:310] 
	I0403 19:33:53.332364   66718 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0403 19:33:53.332399   66718 kubeadm.go:310] 		timed out waiting for the condition
	I0403 19:33:53.332406   66718 kubeadm.go:310] 
	I0403 19:33:53.332435   66718 kubeadm.go:310] 	This error is likely caused by:
	I0403 19:33:53.332465   66718 kubeadm.go:310] 		- The kubelet is not running
	I0403 19:33:53.332560   66718 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0403 19:33:53.332566   66718 kubeadm.go:310] 
	I0403 19:33:53.332655   66718 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0403 19:33:53.332718   66718 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0403 19:33:53.332781   66718 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0403 19:33:53.332790   66718 kubeadm.go:310] 
	I0403 19:33:53.332922   66718 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0403 19:33:53.333025   66718 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0403 19:33:53.333033   66718 kubeadm.go:310] 
	I0403 19:33:53.333168   66718 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0403 19:33:53.333296   66718 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0403 19:33:53.333410   66718 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0403 19:33:53.333518   66718 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0403 19:33:53.333528   66718 kubeadm.go:310] 
	I0403 19:33:53.334367   66718 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:33:53.334492   66718 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0403 19:33:53.334554   66718 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0403 19:33:53.334604   66718 kubeadm.go:394] duration metric: took 7m59.310981648s to StartCluster
	I0403 19:33:53.334636   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:33:53.334685   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:33:53.373643   66718 cri.go:89] found id: ""
	I0403 19:33:53.373669   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.373682   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:33:53.373689   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:33:53.373736   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:33:53.403561   66718 cri.go:89] found id: ""
	I0403 19:33:53.403587   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.403595   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:33:53.403600   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:33:53.403655   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:33:53.433381   66718 cri.go:89] found id: ""
	I0403 19:33:53.433411   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.433420   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:33:53.433427   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:33:53.433480   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:33:53.464729   66718 cri.go:89] found id: ""
	I0403 19:33:53.464758   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.464769   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:33:53.464775   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:33:53.464843   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:33:53.495666   66718 cri.go:89] found id: ""
	I0403 19:33:53.495697   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.495708   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:33:53.495715   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:33:53.495782   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:33:53.527704   66718 cri.go:89] found id: ""
	I0403 19:33:53.527730   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.527739   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:33:53.527747   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:33:53.527804   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:33:53.567852   66718 cri.go:89] found id: ""
	I0403 19:33:53.567874   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.567881   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:33:53.567887   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:33:53.567943   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:33:53.597334   66718 cri.go:89] found id: ""
	I0403 19:33:53.597363   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.597374   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:33:53.597386   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:33:53.597399   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:33:53.653211   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:33:53.653246   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:33:53.666175   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:33:53.666201   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:33:53.736375   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:33:53.736397   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:33:53.736409   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:33:53.837412   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:33:53.837449   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0403 19:33:53.876433   66718 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0403 19:33:53.876481   66718 out.go:270] * 
	W0403 19:33:53.876533   66718 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:33:53.876547   66718 out.go:270] * 
	W0403 19:33:53.877616   66718 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0403 19:33:53.880186   66718 out.go:201] 
	W0403 19:33:53.881256   66718 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:33:53.881290   66718 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0403 19:33:53.881311   66718 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0403 19:33:53.882318   66718 out.go:201] 
	I0403 19:33:54.120332   77599 pod_ready.go:103] pod "etcd-bridge-999005" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:56.122064   77599 pod_ready.go:103] pod "etcd-bridge-999005" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:58.119737   77599 pod_ready.go:93] pod "etcd-bridge-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:58.119764   77599 pod_ready.go:82] duration metric: took 6.005488859s for pod "etcd-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.119775   77599 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.123208   77599 pod_ready.go:93] pod "kube-apiserver-bridge-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:58.123232   77599 pod_ready.go:82] duration metric: took 3.448838ms for pod "kube-apiserver-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.123245   77599 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.126391   77599 pod_ready.go:93] pod "kube-controller-manager-bridge-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:58.126411   77599 pod_ready.go:82] duration metric: took 3.157876ms for pod "kube-controller-manager-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.126422   77599 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-kp7mg" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.129660   77599 pod_ready.go:93] pod "kube-proxy-kp7mg" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:58.129677   77599 pod_ready.go:82] duration metric: took 3.247584ms for pod "kube-proxy-kp7mg" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.129688   77599 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.133889   77599 pod_ready.go:93] pod "kube-scheduler-bridge-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:58.133911   77599 pod_ready.go:82] duration metric: took 4.215142ms for pod "kube-scheduler-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.133921   77599 pod_ready.go:39] duration metric: took 9.038057268s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:58.133939   77599 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:33:58.133987   77599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:33:58.148976   77599 api_server.go:72] duration metric: took 9.830850735s to wait for apiserver process to appear ...
	I0403 19:33:58.149002   77599 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:33:58.149021   77599 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0403 19:33:58.152765   77599 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0403 19:33:58.153801   77599 api_server.go:141] control plane version: v1.32.2
	I0403 19:33:58.153825   77599 api_server.go:131] duration metric: took 4.814693ms to wait for apiserver health ...
	I0403 19:33:58.153833   77599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:33:58.318924   77599 system_pods.go:59] 7 kube-system pods found
	I0403 19:33:58.318971   77599 system_pods.go:61] "coredns-668d6bf9bc-d2sp8" [22f55c40-046d-4876-870a-29a97951f661] Running
	I0403 19:33:58.318980   77599 system_pods.go:61] "etcd-bridge-999005" [10bef341-2f47-418c-93ed-0e09236c9fb8] Running
	I0403 19:33:58.318986   77599 system_pods.go:61] "kube-apiserver-bridge-999005" [c0986f2f-42ad-4c25-bcfc-306c002e19a1] Running
	I0403 19:33:58.318992   77599 system_pods.go:61] "kube-controller-manager-bridge-999005" [d202b4b9-ea4e-4685-9f14-81f090e0d7d7] Running
	I0403 19:33:58.319003   77599 system_pods.go:61] "kube-proxy-kp7mg" [2b5f323f-0954-4bf4-8fde-0574c17c9e0b] Running
	I0403 19:33:58.319008   77599 system_pods.go:61] "kube-scheduler-bridge-999005" [2dc43204-833f-4d34-bd9d-20426247559e] Running
	I0403 19:33:58.319015   77599 system_pods.go:61] "storage-provisioner" [0e4050fe-17bb-4246-a551-61dcdd16389c] Running
	I0403 19:33:58.319023   77599 system_pods.go:74] duration metric: took 165.18288ms to wait for pod list to return data ...
	I0403 19:33:58.319034   77599 default_sa.go:34] waiting for default service account to be created ...
	I0403 19:33:58.517854   77599 default_sa.go:45] found service account: "default"
	I0403 19:33:58.517883   77599 default_sa.go:55] duration metric: took 198.841522ms for default service account to be created ...
	I0403 19:33:58.517895   77599 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 19:33:58.720732   77599 system_pods.go:86] 7 kube-system pods found
	I0403 19:33:58.720761   77599 system_pods.go:89] "coredns-668d6bf9bc-d2sp8" [22f55c40-046d-4876-870a-29a97951f661] Running
	I0403 19:33:58.720769   77599 system_pods.go:89] "etcd-bridge-999005" [10bef341-2f47-418c-93ed-0e09236c9fb8] Running
	I0403 19:33:58.720775   77599 system_pods.go:89] "kube-apiserver-bridge-999005" [c0986f2f-42ad-4c25-bcfc-306c002e19a1] Running
	I0403 19:33:58.720780   77599 system_pods.go:89] "kube-controller-manager-bridge-999005" [d202b4b9-ea4e-4685-9f14-81f090e0d7d7] Running
	I0403 19:33:58.720785   77599 system_pods.go:89] "kube-proxy-kp7mg" [2b5f323f-0954-4bf4-8fde-0574c17c9e0b] Running
	I0403 19:33:58.720789   77599 system_pods.go:89] "kube-scheduler-bridge-999005" [2dc43204-833f-4d34-bd9d-20426247559e] Running
	I0403 19:33:58.720794   77599 system_pods.go:89] "storage-provisioner" [0e4050fe-17bb-4246-a551-61dcdd16389c] Running
	I0403 19:33:58.720803   77599 system_pods.go:126] duration metric: took 202.901205ms to wait for k8s-apps to be running ...
	I0403 19:33:58.720811   77599 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 19:33:58.720857   77599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:33:58.736498   77599 system_svc.go:56] duration metric: took 15.680603ms WaitForService to wait for kubelet
	I0403 19:33:58.736522   77599 kubeadm.go:582] duration metric: took 10.418400754s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:58.736539   77599 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:33:58.918022   77599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:33:58.918053   77599 node_conditions.go:123] node cpu capacity is 2
	I0403 19:33:58.918067   77599 node_conditions.go:105] duration metric: took 181.522606ms to run NodePressure ...
	I0403 19:33:58.918081   77599 start.go:241] waiting for startup goroutines ...
	I0403 19:33:58.918091   77599 start.go:246] waiting for cluster config update ...
	I0403 19:33:58.918111   77599 start.go:255] writing updated cluster config ...
	I0403 19:33:58.918438   77599 ssh_runner.go:195] Run: rm -f paused
	I0403 19:33:58.966577   77599 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 19:33:58.968374   77599 out.go:177] * Done! kubectl is now configured to use "bridge-999005" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.234218332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743709376234197315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0761494-bd3e-4756-add3-1c6258ac5236 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.234675042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d323fcfc-9717-4a98-af0a-e7dfcc47a5ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.234736802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d323fcfc-9717-4a98-af0a-e7dfcc47a5ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.234772302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d323fcfc-9717-4a98-af0a-e7dfcc47a5ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.264425138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8cd771b7-8819-4114-a00c-551e81502af5 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.264512624Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8cd771b7-8819-4114-a00c-551e81502af5 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.265658544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6614a02d-d50c-43b0-8f08-d1fbfd21b52f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.266085003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743709376266063674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6614a02d-d50c-43b0-8f08-d1fbfd21b52f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.266637035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfce5990-0c28-4f5d-aed1-8248400839e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.266700915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfce5990-0c28-4f5d-aed1-8248400839e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.266736770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dfce5990-0c28-4f5d-aed1-8248400839e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.295610953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=874a732a-77fa-49a2-b238-27e35a9b1b02 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.295713250Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=874a732a-77fa-49a2-b238-27e35a9b1b02 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.297194540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf1b45f6-b244-4d58-8bdf-41b4f2e53b49 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.297581286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743709376297557103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf1b45f6-b244-4d58-8bdf-41b4f2e53b49 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.298060523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35400af6-e2d8-47d5-b5e1-1fa37bcca2d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.298113140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35400af6-e2d8-47d5-b5e1-1fa37bcca2d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.298150258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=35400af6-e2d8-47d5-b5e1-1fa37bcca2d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.326086141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca024335-15f4-43f3-90c5-350e6553f2b1 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.326157525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca024335-15f4-43f3-90c5-350e6553f2b1 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.327245138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49c61ed8-154b-45fc-ad79-953ae8041a29 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.327623942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743709376327602775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49c61ed8-154b-45fc-ad79-953ae8041a29 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.328236473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57b5b71e-f08e-474b-b0a4-93b006abeebd name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.328300690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57b5b71e-f08e-474b-b0a4-93b006abeebd name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:42:56 old-k8s-version-471019 crio[636]: time="2025-04-03 19:42:56.328345562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=57b5b71e-f08e-474b-b0a4-93b006abeebd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 3 19:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052726] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041853] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.065841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.955511] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.571384] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.620728] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.063202] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054417] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.185024] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.123908] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.218372] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +6.279584] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.069499] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.643502] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[Apr 3 19:26] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 3 19:30] systemd-fstab-generator[5045]: Ignoring "noauto" option for root device
	[Apr 3 19:31] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.102429] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:42:56 up 17 min,  0 users,  load average: 0.09, 0.04, 0.01
	Linux old-k8s-version-471019 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000d666f0)
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000e23ef0, 0x4f0ac20, 0xc000e5a3c0, 0x1, 0xc0001020c0)
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0008c00e0, 0xc0001020c0)
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c12280, 0xc000c22460)
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 03 19:42:54 old-k8s-version-471019 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 03 19:42:54 old-k8s-version-471019 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 03 19:42:54 old-k8s-version-471019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 03 19:42:55 old-k8s-version-471019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 03 19:42:55 old-k8s-version-471019 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 03 19:42:55 old-k8s-version-471019 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 03 19:42:55 old-k8s-version-471019 kubelet[6512]: I0403 19:42:55.442476    6512 server.go:416] Version: v1.20.0
	Apr 03 19:42:55 old-k8s-version-471019 kubelet[6512]: I0403 19:42:55.442771    6512 server.go:837] Client rotation is on, will bootstrap in background
	Apr 03 19:42:55 old-k8s-version-471019 kubelet[6512]: I0403 19:42:55.444947    6512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 03 19:42:55 old-k8s-version-471019 kubelet[6512]: W0403 19:42:55.445961    6512 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 03 19:42:55 old-k8s-version-471019 kubelet[6512]: I0403 19:42:55.446109    6512 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-471019 -n old-k8s-version-471019
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 2 (218.322695ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-471019" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.38s)
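A minimal troubleshooting sketch for the kubelet crash-loop captured in the logs above (a hypothetical session, not part of the test run; it assumes shell access to the node of the "old-k8s-version-471019" profile via minikube ssh and simply mirrors the checks kubeadm itself suggests):

	# open a shell on the affected node (profile name taken from the logs above)
	minikube ssh -p old-k8s-version-471019
	# check whether the kubelet service is running and why it last exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# list any control-plane containers CRI-O started, per the kubeadm hint
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the journal points at a cgroup-driver mismatch, the suggestion already logged by minikube above (passing --extra-config=kubelet.cgroup-driver=systemd to minikube start) would be the next thing to try.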

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (358.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:43:02.548707   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:43:12.665438   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:43:31.662144   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:43:40.368311   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:43:59.364283   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:43:59.397724   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:44:09.255547   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was logged 17 times in succession while the apiserver at 192.168.61.209:8443 stayed unreachable; identical entries collapsed]
E0403 19:44:27.099445   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was logged 7 times in succession; identical entries collapsed]
E0403 19:44:34.400998   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was logged 80 times in succession; identical entries collapsed]
E0403 19:45:54.740622   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/auto-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was logged 10 times in succession; identical entries collapsed]
E0403 19:46:04.511611   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/kindnet-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was logged 30 times in succession; identical entries collapsed]
E0403 19:46:34.284812   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/no-preload-409395/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was logged 13 times in succession; identical entries collapsed]
E0403 19:46:47.074568   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/calico-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was logged 22 times in succession; identical entries collapsed]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:47:34.849468   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/custom-flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
E0403 19:47:37.480681   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was emitted 16 times in a row]
E0403 19:47:53.688631   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/default-k8s-diff-port-875122/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was emitted 4 times in a row]
E0403 19:47:57.350087   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/no-preload-409395/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was emitted 15 times in a row]
E0403 19:48:12.666420   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/enable-default-cni-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was emitted 19 times in a row]
E0403 19:48:31.662600   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/flannel-999005/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.209:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.209:8443: connect: connection refused
[the warning above was emitted 23 times in a row]
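The repeated warning shows the test helper polling the kubernetes-dashboard pod list by label while the apiserver at 192.168.61.209:8443 keeps refusing connections. A minimal manual equivalent of that poll, assuming the old-k8s-version-471019 context is still present in the kubeconfig, looks like this (illustrative sketch, not part of the recorded run):
	# same API query the helper issues, expressed as a kubectl command
	kubectl --context old-k8s-version-471019 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# while the apiserver is down this typically fails with something like:
	#   The connection to the server 192.168.61.209:8443 was refused - did you specify the right host or port?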
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-471019 -n old-k8s-version-471019
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 2 (221.738626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-471019" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
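With status reporting the apiserver as Stopped, one hedged follow-up (not captured in this run) is to check whether a kube-apiserver container exists on the node at all, using the same ssh form the suite uses elsewhere:
	# illustrative only: list all CRI-O containers on the node and filter for the apiserver
	out/minikube-linux-amd64 -p old-k8s-version-471019 ssh "sudo crictl ps -a | grep kube-apiserver"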
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-471019 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-471019 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.333µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-471019 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
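The deployment info above is empty because the describe call timed out, so the image assertion had nothing to inspect. Once the apiserver is reachable again, the image actually deployed by the dashboard addon can be read straight from the deployment spec; a sketch of that check (not part of the recorded run):
	# illustrative only: print the container image used by dashboard-metrics-scraper
	kubectl --context old-k8s-version-471019 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the test expects this to contain registry.k8s.io/echoserver:1.4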
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 2 (213.026571ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-471019 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-999005 sudo iptables                       | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo docker                         | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo cat                            | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo                                | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo find                           | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-999005 sudo crio                           | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-999005                                     | bridge-999005 | jenkins | v1.35.0 | 03 Apr 25 19:34 UTC | 03 Apr 25 19:34 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 19:33:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 19:33:02.376869   77599 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:33:02.377092   77599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:33:02.377103   77599 out.go:358] Setting ErrFile to fd 2...
	I0403 19:33:02.377107   77599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:33:02.377328   77599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:33:02.378024   77599 out.go:352] Setting JSON to false
	I0403 19:33:02.379161   77599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8127,"bootTime":1743700655,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:33:02.379239   77599 start.go:139] virtualization: kvm guest
	I0403 19:33:02.380689   77599 out.go:177] * [bridge-999005] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:33:02.382009   77599 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:33:02.382020   77599 notify.go:220] Checking for updates...
	I0403 19:33:02.384007   77599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:33:02.385169   77599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:33:02.386247   77599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:33:02.387253   77599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:33:02.388401   77599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:33:02.389846   77599 config.go:182] Loaded profile config "enable-default-cni-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:02.389947   77599 config.go:182] Loaded profile config "flannel-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:02.390028   77599 config.go:182] Loaded profile config "old-k8s-version-471019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:33:02.390112   77599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:33:02.427821   77599 out.go:177] * Using the kvm2 driver based on user configuration
	I0403 19:33:02.428964   77599 start.go:297] selected driver: kvm2
	I0403 19:33:02.428982   77599 start.go:901] validating driver "kvm2" against <nil>
	I0403 19:33:02.428993   77599 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:33:02.429643   77599 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:33:02.429716   77599 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 19:33:02.446281   77599 install.go:137] /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2 version is 1.35.0
	I0403 19:33:02.446337   77599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 19:33:02.446713   77599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:02.446763   77599 cni.go:84] Creating CNI manager for "bridge"
	I0403 19:33:02.446772   77599 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 19:33:02.446854   77599 start.go:340] cluster config:
	{Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:33:02.446984   77599 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 19:33:02.448554   77599 out.go:177] * Starting "bridge-999005" primary control-plane node in "bridge-999005" cluster
	I0403 19:33:02.449622   77599 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 19:33:02.449667   77599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0403 19:33:02.449684   77599 cache.go:56] Caching tarball of preloaded images
	I0403 19:33:02.449752   77599 preload.go:172] Found /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0403 19:33:02.449762   77599 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0403 19:33:02.449877   77599 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/config.json ...
	I0403 19:33:02.449899   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/config.json: {Name:mk2379bf0104743094b5c7dde2a4c0ad0c4e9cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:02.450060   77599 start.go:360] acquireMachinesLock for bridge-999005: {Name:mk8972215f0ab94ca7966bf5adf18262e19bccd0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0403 19:33:02.450095   77599 start.go:364] duration metric: took 19.647µs to acquireMachinesLock for "bridge-999005"
	I0403 19:33:02.450116   77599 start.go:93] Provisioning new machine with config: &{Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:33:02.450176   77599 start.go:125] createHost starting for "" (driver="kvm2")
	I0403 19:32:59.278761   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:01.780343   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:04.220006   75819 kubeadm.go:310] [api-check] The API server is healthy after 5.502485937s
	I0403 19:33:04.234838   75819 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0403 19:33:04.249952   75819 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0403 19:33:04.281515   75819 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0403 19:33:04.281698   75819 kubeadm.go:310] [mark-control-plane] Marking the node flannel-999005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0403 19:33:04.292849   75819 kubeadm.go:310] [bootstrap-token] Using token: i2opuv.2m47nf28qphn3gfh
	I0403 19:33:04.294088   75819 out.go:235]   - Configuring RBAC rules ...
	I0403 19:33:04.294250   75819 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0403 19:33:04.298299   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0403 19:33:04.306023   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0403 19:33:04.310195   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0403 19:33:04.316560   75819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0403 19:33:04.323542   75819 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0403 19:33:04.627539   75819 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0403 19:33:05.059216   75819 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0403 19:33:05.626881   75819 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0403 19:33:05.627833   75819 kubeadm.go:310] 
	I0403 19:33:05.627927   75819 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0403 19:33:05.627938   75819 kubeadm.go:310] 
	I0403 19:33:05.628077   75819 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0403 19:33:05.628099   75819 kubeadm.go:310] 
	I0403 19:33:05.628132   75819 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0403 19:33:05.628211   75819 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0403 19:33:05.628291   75819 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0403 19:33:05.628302   75819 kubeadm.go:310] 
	I0403 19:33:05.628386   75819 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0403 19:33:05.628396   75819 kubeadm.go:310] 
	I0403 19:33:05.628464   75819 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0403 19:33:05.628473   75819 kubeadm.go:310] 
	I0403 19:33:05.628539   75819 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0403 19:33:05.628647   75819 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0403 19:33:05.628774   75819 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0403 19:33:05.628800   75819 kubeadm.go:310] 
	I0403 19:33:05.628905   75819 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0403 19:33:05.629014   75819 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0403 19:33:05.629022   75819 kubeadm.go:310] 
	I0403 19:33:05.629117   75819 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i2opuv.2m47nf28qphn3gfh \
	I0403 19:33:05.629239   75819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 \
	I0403 19:33:05.629267   75819 kubeadm.go:310] 	--control-plane 
	I0403 19:33:05.629275   75819 kubeadm.go:310] 
	I0403 19:33:05.629382   75819 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0403 19:33:05.629391   75819 kubeadm.go:310] 
	I0403 19:33:05.629494   75819 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i2opuv.2m47nf28qphn3gfh \
	I0403 19:33:05.629630   75819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 
	I0403 19:33:05.630329   75819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:33:05.630359   75819 cni.go:84] Creating CNI manager for "flannel"
	I0403 19:33:05.631659   75819 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0403 19:33:02.451472   77599 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0403 19:33:02.451609   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:02.451664   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:02.466285   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0403 19:33:02.466761   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:02.467372   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:02.467391   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:02.467816   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:02.468014   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:02.468179   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:02.468339   77599 start.go:159] libmachine.API.Create for "bridge-999005" (driver="kvm2")
	I0403 19:33:02.468372   77599 client.go:168] LocalClient.Create starting
	I0403 19:33:02.468415   77599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem
	I0403 19:33:02.468455   77599 main.go:141] libmachine: Decoding PEM data...
	I0403 19:33:02.468481   77599 main.go:141] libmachine: Parsing certificate...
	I0403 19:33:02.468554   77599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem
	I0403 19:33:02.468582   77599 main.go:141] libmachine: Decoding PEM data...
	I0403 19:33:02.468601   77599 main.go:141] libmachine: Parsing certificate...
	I0403 19:33:02.468620   77599 main.go:141] libmachine: Running pre-create checks...
	I0403 19:33:02.468639   77599 main.go:141] libmachine: (bridge-999005) Calling .PreCreateCheck
	I0403 19:33:02.468953   77599 main.go:141] libmachine: (bridge-999005) Calling .GetConfigRaw
	I0403 19:33:02.469335   77599 main.go:141] libmachine: Creating machine...
	I0403 19:33:02.469347   77599 main.go:141] libmachine: (bridge-999005) Calling .Create
	I0403 19:33:02.469470   77599 main.go:141] libmachine: (bridge-999005) creating KVM machine...
	I0403 19:33:02.469485   77599 main.go:141] libmachine: (bridge-999005) creating network...
	I0403 19:33:02.470738   77599 main.go:141] libmachine: (bridge-999005) DBG | found existing default KVM network
	I0403 19:33:02.472415   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:02.472249   77621 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123820}
	I0403 19:33:02.472448   77599 main.go:141] libmachine: (bridge-999005) DBG | created network xml: 
	I0403 19:33:02.472470   77599 main.go:141] libmachine: (bridge-999005) DBG | <network>
	I0403 19:33:02.472483   77599 main.go:141] libmachine: (bridge-999005) DBG |   <name>mk-bridge-999005</name>
	I0403 19:33:02.472494   77599 main.go:141] libmachine: (bridge-999005) DBG |   <dns enable='no'/>
	I0403 19:33:02.472504   77599 main.go:141] libmachine: (bridge-999005) DBG |   
	I0403 19:33:02.472515   77599 main.go:141] libmachine: (bridge-999005) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0403 19:33:02.472526   77599 main.go:141] libmachine: (bridge-999005) DBG |     <dhcp>
	I0403 19:33:02.472534   77599 main.go:141] libmachine: (bridge-999005) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0403 19:33:02.472550   77599 main.go:141] libmachine: (bridge-999005) DBG |     </dhcp>
	I0403 19:33:02.472564   77599 main.go:141] libmachine: (bridge-999005) DBG |   </ip>
	I0403 19:33:02.472577   77599 main.go:141] libmachine: (bridge-999005) DBG |   
	I0403 19:33:02.472586   77599 main.go:141] libmachine: (bridge-999005) DBG | </network>
	I0403 19:33:02.472596   77599 main.go:141] libmachine: (bridge-999005) DBG | 
	I0403 19:33:02.477381   77599 main.go:141] libmachine: (bridge-999005) DBG | trying to create private KVM network mk-bridge-999005 192.168.39.0/24...
	I0403 19:33:02.549445   77599 main.go:141] libmachine: (bridge-999005) setting up store path in /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005 ...
	I0403 19:33:02.549483   77599 main.go:141] libmachine: (bridge-999005) DBG | private KVM network mk-bridge-999005 192.168.39.0/24 created
	I0403 19:33:02.549497   77599 main.go:141] libmachine: (bridge-999005) building disk image from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0403 19:33:02.549523   77599 main.go:141] libmachine: (bridge-999005) Downloading /home/jenkins/minikube-integration/20591-14371/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0403 19:33:02.549542   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:02.549359   77621 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:33:02.808436   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:02.808274   77621 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa...
	I0403 19:33:03.010631   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:03.010517   77621 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/bridge-999005.rawdisk...
	I0403 19:33:03.010661   77599 main.go:141] libmachine: (bridge-999005) DBG | Writing magic tar header
	I0403 19:33:03.010671   77599 main.go:141] libmachine: (bridge-999005) DBG | Writing SSH key tar header
	I0403 19:33:03.010768   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:03.010673   77621 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005 ...
	I0403 19:33:03.010855   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005
	I0403 19:33:03.010883   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005 (perms=drwx------)
	I0403 19:33:03.010899   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube/machines (perms=drwxr-xr-x)
	I0403 19:33:03.010913   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube/machines
	I0403 19:33:03.010948   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:33:03.010961   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20591-14371
	I0403 19:33:03.010974   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0403 19:33:03.010993   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371/.minikube (perms=drwxr-xr-x)
	I0403 19:33:03.011002   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home/jenkins
	I0403 19:33:03.011012   77599 main.go:141] libmachine: (bridge-999005) DBG | checking permissions on dir: /home
	I0403 19:33:03.011022   77599 main.go:141] libmachine: (bridge-999005) DBG | skipping /home - not owner
	I0403 19:33:03.011036   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration/20591-14371 (perms=drwxrwxr-x)
	I0403 19:33:03.011047   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0403 19:33:03.011061   77599 main.go:141] libmachine: (bridge-999005) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0403 19:33:03.011071   77599 main.go:141] libmachine: (bridge-999005) creating domain...
	I0403 19:33:03.012376   77599 main.go:141] libmachine: (bridge-999005) define libvirt domain using xml: 
	I0403 19:33:03.012401   77599 main.go:141] libmachine: (bridge-999005) <domain type='kvm'>
	I0403 19:33:03.012412   77599 main.go:141] libmachine: (bridge-999005)   <name>bridge-999005</name>
	I0403 19:33:03.012421   77599 main.go:141] libmachine: (bridge-999005)   <memory unit='MiB'>3072</memory>
	I0403 19:33:03.012429   77599 main.go:141] libmachine: (bridge-999005)   <vcpu>2</vcpu>
	I0403 19:33:03.012436   77599 main.go:141] libmachine: (bridge-999005)   <features>
	I0403 19:33:03.012444   77599 main.go:141] libmachine: (bridge-999005)     <acpi/>
	I0403 19:33:03.012452   77599 main.go:141] libmachine: (bridge-999005)     <apic/>
	I0403 19:33:03.012461   77599 main.go:141] libmachine: (bridge-999005)     <pae/>
	I0403 19:33:03.012468   77599 main.go:141] libmachine: (bridge-999005)     
	I0403 19:33:03.012473   77599 main.go:141] libmachine: (bridge-999005)   </features>
	I0403 19:33:03.012482   77599 main.go:141] libmachine: (bridge-999005)   <cpu mode='host-passthrough'>
	I0403 19:33:03.012508   77599 main.go:141] libmachine: (bridge-999005)   
	I0403 19:33:03.012524   77599 main.go:141] libmachine: (bridge-999005)   </cpu>
	I0403 19:33:03.012549   77599 main.go:141] libmachine: (bridge-999005)   <os>
	I0403 19:33:03.012572   77599 main.go:141] libmachine: (bridge-999005)     <type>hvm</type>
	I0403 19:33:03.012588   77599 main.go:141] libmachine: (bridge-999005)     <boot dev='cdrom'/>
	I0403 19:33:03.012606   77599 main.go:141] libmachine: (bridge-999005)     <boot dev='hd'/>
	I0403 19:33:03.012615   77599 main.go:141] libmachine: (bridge-999005)     <bootmenu enable='no'/>
	I0403 19:33:03.012622   77599 main.go:141] libmachine: (bridge-999005)   </os>
	I0403 19:33:03.012630   77599 main.go:141] libmachine: (bridge-999005)   <devices>
	I0403 19:33:03.012641   77599 main.go:141] libmachine: (bridge-999005)     <disk type='file' device='cdrom'>
	I0403 19:33:03.012653   77599 main.go:141] libmachine: (bridge-999005)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/boot2docker.iso'/>
	I0403 19:33:03.012670   77599 main.go:141] libmachine: (bridge-999005)       <target dev='hdc' bus='scsi'/>
	I0403 19:33:03.012679   77599 main.go:141] libmachine: (bridge-999005)       <readonly/>
	I0403 19:33:03.012701   77599 main.go:141] libmachine: (bridge-999005)     </disk>
	I0403 19:33:03.012714   77599 main.go:141] libmachine: (bridge-999005)     <disk type='file' device='disk'>
	I0403 19:33:03.012725   77599 main.go:141] libmachine: (bridge-999005)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0403 19:33:03.012745   77599 main.go:141] libmachine: (bridge-999005)       <source file='/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/bridge-999005.rawdisk'/>
	I0403 19:33:03.012755   77599 main.go:141] libmachine: (bridge-999005)       <target dev='hda' bus='virtio'/>
	I0403 19:33:03.012769   77599 main.go:141] libmachine: (bridge-999005)     </disk>
	I0403 19:33:03.012801   77599 main.go:141] libmachine: (bridge-999005)     <interface type='network'>
	I0403 19:33:03.012814   77599 main.go:141] libmachine: (bridge-999005)       <source network='mk-bridge-999005'/>
	I0403 19:33:03.012822   77599 main.go:141] libmachine: (bridge-999005)       <model type='virtio'/>
	I0403 19:33:03.012827   77599 main.go:141] libmachine: (bridge-999005)     </interface>
	I0403 19:33:03.012834   77599 main.go:141] libmachine: (bridge-999005)     <interface type='network'>
	I0403 19:33:03.012839   77599 main.go:141] libmachine: (bridge-999005)       <source network='default'/>
	I0403 19:33:03.012846   77599 main.go:141] libmachine: (bridge-999005)       <model type='virtio'/>
	I0403 19:33:03.012851   77599 main.go:141] libmachine: (bridge-999005)     </interface>
	I0403 19:33:03.012856   77599 main.go:141] libmachine: (bridge-999005)     <serial type='pty'>
	I0403 19:33:03.012863   77599 main.go:141] libmachine: (bridge-999005)       <target port='0'/>
	I0403 19:33:03.012888   77599 main.go:141] libmachine: (bridge-999005)     </serial>
	I0403 19:33:03.012900   77599 main.go:141] libmachine: (bridge-999005)     <console type='pty'>
	I0403 19:33:03.012911   77599 main.go:141] libmachine: (bridge-999005)       <target type='serial' port='0'/>
	I0403 19:33:03.012924   77599 main.go:141] libmachine: (bridge-999005)     </console>
	I0403 19:33:03.012929   77599 main.go:141] libmachine: (bridge-999005)     <rng model='virtio'>
	I0403 19:33:03.012935   77599 main.go:141] libmachine: (bridge-999005)       <backend model='random'>/dev/random</backend>
	I0403 19:33:03.012939   77599 main.go:141] libmachine: (bridge-999005)     </rng>
	I0403 19:33:03.012943   77599 main.go:141] libmachine: (bridge-999005)     
	I0403 19:33:03.012956   77599 main.go:141] libmachine: (bridge-999005)     
	I0403 19:33:03.012985   77599 main.go:141] libmachine: (bridge-999005)   </devices>
	I0403 19:33:03.013008   77599 main.go:141] libmachine: (bridge-999005) </domain>
	I0403 19:33:03.013039   77599 main.go:141] libmachine: (bridge-999005) 
	I0403 19:33:03.017100   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:76:dd:cb in network default
	I0403 19:33:03.017850   77599 main.go:141] libmachine: (bridge-999005) starting domain...
	I0403 19:33:03.017875   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:03.017883   77599 main.go:141] libmachine: (bridge-999005) ensuring networks are active...
	I0403 19:33:03.018620   77599 main.go:141] libmachine: (bridge-999005) Ensuring network default is active
	I0403 19:33:03.018960   77599 main.go:141] libmachine: (bridge-999005) Ensuring network mk-bridge-999005 is active
	I0403 19:33:03.019610   77599 main.go:141] libmachine: (bridge-999005) getting domain XML...
	I0403 19:33:03.020474   77599 main.go:141] libmachine: (bridge-999005) creating domain...
	I0403 19:33:04.308608   77599 main.go:141] libmachine: (bridge-999005) waiting for IP...
	I0403 19:33:04.309508   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:04.310076   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:04.310237   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:04.310154   77621 retry.go:31] will retry after 304.11605ms: waiting for domain to come up
	I0403 19:33:04.615460   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:04.616072   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:04.616105   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:04.616027   77621 retry.go:31] will retry after 352.836416ms: waiting for domain to come up
	I0403 19:33:04.970906   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:04.971506   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:04.971580   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:04.971492   77621 retry.go:31] will retry after 384.292797ms: waiting for domain to come up
	I0403 19:33:05.357155   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:05.357783   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:05.357804   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:05.357746   77621 retry.go:31] will retry after 593.108014ms: waiting for domain to come up
	I0403 19:33:05.953253   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:05.953908   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:05.953955   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:05.953851   77621 retry.go:31] will retry after 715.405514ms: waiting for domain to come up
	I0403 19:33:06.671416   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:06.671869   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:06.671893   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:06.671849   77621 retry.go:31] will retry after 643.974958ms: waiting for domain to come up
	I0403 19:33:07.317681   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:07.318083   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:07.318111   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:07.318044   77621 retry.go:31] will retry after 830.836827ms: waiting for domain to come up
	I0403 19:33:04.278957   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:06.279442   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:05.632586   75819 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0403 19:33:05.638039   75819 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0403 19:33:05.638061   75819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0403 19:33:05.665102   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0403 19:33:06.148083   75819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0403 19:33:06.148182   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:06.148222   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-999005 minikube.k8s.io/updated_at=2025_04_03T19_33_06_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053 minikube.k8s.io/name=flannel-999005 minikube.k8s.io/primary=true
	I0403 19:33:06.328677   75819 ops.go:34] apiserver oom_adj: -16
	I0403 19:33:06.328804   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:06.829687   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:07.329161   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:07.829420   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:08.328906   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:08.828872   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:09.328884   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:09.829539   75819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:09.973416   75819 kubeadm.go:1113] duration metric: took 3.825298406s to wait for elevateKubeSystemPrivileges
	I0403 19:33:09.973463   75819 kubeadm.go:394] duration metric: took 14.815036163s to StartCluster
	I0403 19:33:09.973485   75819 settings.go:142] acquiring lock: {Name:mk92384ef10350a2ea8b1710e0b74c72a0214398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:09.973557   75819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:33:09.974857   75819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:09.975109   75819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0403 19:33:09.975113   75819 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:33:09.975194   75819 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0403 19:33:09.975291   75819 addons.go:69] Setting storage-provisioner=true in profile "flannel-999005"
	I0403 19:33:09.975313   75819 addons.go:238] Setting addon storage-provisioner=true in "flannel-999005"
	I0403 19:33:09.975344   75819 host.go:66] Checking if "flannel-999005" exists ...
	I0403 19:33:09.975339   75819 addons.go:69] Setting default-storageclass=true in profile "flannel-999005"
	I0403 19:33:09.975359   75819 config.go:182] Loaded profile config "flannel-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:09.975366   75819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-999005"
	I0403 19:33:09.975856   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:09.975875   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:09.975897   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:09.975907   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:09.976893   75819 out.go:177] * Verifying Kubernetes components...
	I0403 19:33:09.978410   75819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:09.995627   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44917
	I0403 19:33:09.995731   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46853
	I0403 19:33:09.996071   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:09.996180   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:09.996730   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:09.996748   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:09.996880   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:09.996905   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:09.997268   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:09.997310   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:09.997479   75819 main.go:141] libmachine: (flannel-999005) Calling .GetState
	I0403 19:33:09.997886   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:09.997934   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:10.001151   75819 addons.go:238] Setting addon default-storageclass=true in "flannel-999005"
	I0403 19:33:10.001199   75819 host.go:66] Checking if "flannel-999005" exists ...
	I0403 19:33:10.001557   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:10.001587   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:10.014104   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0403 19:33:10.014524   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:10.015098   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:10.015123   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:10.015470   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:10.015720   75819 main.go:141] libmachine: (flannel-999005) Calling .GetState
	I0403 19:33:10.017793   75819 main.go:141] libmachine: (flannel-999005) Calling .DriverName
	I0403 19:33:10.019942   75819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:33:10.021057   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0403 19:33:10.021158   75819 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:10.021177   75819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0403 19:33:10.021200   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHHostname
	I0403 19:33:10.021505   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:10.021986   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:10.022001   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:10.022291   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:10.022934   75819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:10.022978   75819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:10.024920   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.025474   75819 main.go:141] libmachine: (flannel-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:2c", ip: ""} in network mk-flannel-999005: {Iface:virbr4 ExpiryTime:2025-04-03 20:32:40 +0000 UTC Type:0 Mac:52:54:00:f9:eb:2c Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:flannel-999005 Clientid:01:52:54:00:f9:eb:2c}
	I0403 19:33:10.025494   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined IP address 192.168.72.34 and MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.025764   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHPort
	I0403 19:33:10.025935   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHKeyPath
	I0403 19:33:10.026060   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHUsername
	I0403 19:33:10.026152   75819 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/flannel-999005/id_rsa Username:docker}
	I0403 19:33:10.038389   75819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0403 19:33:10.038851   75819 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:10.039336   75819 main.go:141] libmachine: Using API Version  1
	I0403 19:33:10.039352   75819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:10.039758   75819 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:10.039925   75819 main.go:141] libmachine: (flannel-999005) Calling .GetState
	I0403 19:33:10.041799   75819 main.go:141] libmachine: (flannel-999005) Calling .DriverName
	I0403 19:33:10.041979   75819 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:10.041991   75819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0403 19:33:10.042006   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHHostname
	I0403 19:33:10.045247   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.045722   75819 main.go:141] libmachine: (flannel-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:eb:2c", ip: ""} in network mk-flannel-999005: {Iface:virbr4 ExpiryTime:2025-04-03 20:32:40 +0000 UTC Type:0 Mac:52:54:00:f9:eb:2c Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:flannel-999005 Clientid:01:52:54:00:f9:eb:2c}
	I0403 19:33:10.045806   75819 main.go:141] libmachine: (flannel-999005) DBG | domain flannel-999005 has defined IP address 192.168.72.34 and MAC address 52:54:00:f9:eb:2c in network mk-flannel-999005
	I0403 19:33:10.046067   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHPort
	I0403 19:33:10.046226   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHKeyPath
	I0403 19:33:10.046320   75819 main.go:141] libmachine: (flannel-999005) Calling .GetSSHUsername
	I0403 19:33:10.046486   75819 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/flannel-999005/id_rsa Username:docker}
	I0403 19:33:10.299013   75819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:10.317655   75819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:10.338018   75819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:33:10.338068   75819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0403 19:33:10.862171   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862198   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.862252   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862291   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.862394   75819 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0403 19:33:10.862529   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.862607   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.862646   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.862682   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.862685   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.862709   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862717   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.862733   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.862743   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.862756   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.863008   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.863020   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.863023   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.863228   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.863249   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.863680   75819 node_ready.go:35] waiting up to 15m0s for node "flannel-999005" to be "Ready" ...
	I0403 19:33:10.884450   75819 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:10.884469   75819 main.go:141] libmachine: (flannel-999005) Calling .Close
	I0403 19:33:10.884725   75819 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:10.884743   75819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:10.884768   75819 main.go:141] libmachine: (flannel-999005) DBG | Closing plugin on server side
	I0403 19:33:10.886301   75819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0403 19:33:08.779259   73990 pod_ready.go:103] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:10.778746   73990 pod_ready.go:93] pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.778767   73990 pod_ready.go:82] duration metric: took 38.005486758s for pod "coredns-668d6bf9bc-2vwz9" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.778775   73990 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.781400   73990 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-nthv6" not found
	I0403 19:33:10.781418   73990 pod_ready.go:82] duration metric: took 2.637243ms for pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace to be "Ready" ...
	E0403 19:33:10.781427   73990 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-nthv6" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-nthv6" not found
	I0403 19:33:10.781433   73990 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.785172   73990 pod_ready.go:93] pod "etcd-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.785192   73990 pod_ready.go:82] duration metric: took 3.752808ms for pod "etcd-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.785207   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.788834   73990 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.788850   73990 pod_ready.go:82] duration metric: took 3.634986ms for pod "kube-apiserver-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.788861   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.793809   73990 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.793831   73990 pod_ready.go:82] duration metric: took 4.96233ms for pod "kube-controller-manager-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.793843   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-mzxck" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.977165   73990 pod_ready.go:93] pod "kube-proxy-mzxck" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:10.977191   73990 pod_ready.go:82] duration metric: took 183.339442ms for pod "kube-proxy-mzxck" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:10.977209   73990 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:11.377090   73990 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:11.377122   73990 pod_ready.go:82] duration metric: took 399.903527ms for pod "kube-scheduler-enable-default-cni-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:11.377135   73990 pod_ready.go:39] duration metric: took 38.606454546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:11.377156   73990 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:33:11.377225   73990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:33:11.399542   73990 api_server.go:72] duration metric: took 38.946574315s to wait for apiserver process to appear ...
	I0403 19:33:11.399566   73990 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:33:11.399582   73990 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I0403 19:33:11.405734   73990 api_server.go:279] https://192.168.50.55:8443/healthz returned 200:
	ok
	I0403 19:33:11.406888   73990 api_server.go:141] control plane version: v1.32.2
	I0403 19:33:11.406910   73990 api_server.go:131] duration metric: took 7.338515ms to wait for apiserver health ...
	I0403 19:33:11.406918   73990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:33:11.582871   73990 system_pods.go:59] 7 kube-system pods found
	I0403 19:33:11.582912   73990 system_pods.go:61] "coredns-668d6bf9bc-2vwz9" [e83c5e99-c2f0-4228-bc84-d048bd7dba97] Running
	I0403 19:33:11.582920   73990 system_pods.go:61] "etcd-enable-default-cni-999005" [201225ab-9372-41eb-9c78-a52f125b0435] Running
	I0403 19:33:11.582927   73990 system_pods.go:61] "kube-apiserver-enable-default-cni-999005" [f3e9e4a1-810a-423a-8e08-35d311067324] Running
	I0403 19:33:11.582933   73990 system_pods.go:61] "kube-controller-manager-enable-default-cni-999005" [0b827b54-1569-4c8e-a582-ec0fd8e97cbc] Running
	I0403 19:33:11.582938   73990 system_pods.go:61] "kube-proxy-mzxck" [6c2874ed-9e8f-4222-87c3-fe23d207134c] Running
	I0403 19:33:11.582943   73990 system_pods.go:61] "kube-scheduler-enable-default-cni-999005" [e5d0c29c-06fc-4614-a107-51917236c60c] Running
	I0403 19:33:11.582949   73990 system_pods.go:61] "storage-provisioner" [6fab90c6-1563-4504-83d8-443f80cfb99c] Running
	I0403 19:33:11.582957   73990 system_pods.go:74] duration metric: took 176.033201ms to wait for pod list to return data ...
	I0403 19:33:11.582971   73990 default_sa.go:34] waiting for default service account to be created ...
	I0403 19:33:11.777789   73990 default_sa.go:45] found service account: "default"
	I0403 19:33:11.777811   73990 default_sa.go:55] duration metric: took 194.83101ms for default service account to be created ...
	I0403 19:33:11.777819   73990 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 19:33:11.977547   73990 system_pods.go:86] 7 kube-system pods found
	I0403 19:33:11.977583   73990 system_pods.go:89] "coredns-668d6bf9bc-2vwz9" [e83c5e99-c2f0-4228-bc84-d048bd7dba97] Running
	I0403 19:33:11.977592   73990 system_pods.go:89] "etcd-enable-default-cni-999005" [201225ab-9372-41eb-9c78-a52f125b0435] Running
	I0403 19:33:11.977599   73990 system_pods.go:89] "kube-apiserver-enable-default-cni-999005" [f3e9e4a1-810a-423a-8e08-35d311067324] Running
	I0403 19:33:11.977605   73990 system_pods.go:89] "kube-controller-manager-enable-default-cni-999005" [0b827b54-1569-4c8e-a582-ec0fd8e97cbc] Running
	I0403 19:33:11.977609   73990 system_pods.go:89] "kube-proxy-mzxck" [6c2874ed-9e8f-4222-87c3-fe23d207134c] Running
	I0403 19:33:11.977615   73990 system_pods.go:89] "kube-scheduler-enable-default-cni-999005" [e5d0c29c-06fc-4614-a107-51917236c60c] Running
	I0403 19:33:11.977620   73990 system_pods.go:89] "storage-provisioner" [6fab90c6-1563-4504-83d8-443f80cfb99c] Running
	I0403 19:33:11.977629   73990 system_pods.go:126] duration metric: took 199.803644ms to wait for k8s-apps to be running ...
	I0403 19:33:11.977643   73990 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 19:33:11.977695   73990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:33:11.993125   73990 system_svc.go:56] duration metric: took 15.471997ms WaitForService to wait for kubelet
	I0403 19:33:11.993158   73990 kubeadm.go:582] duration metric: took 39.540195871s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:11.993188   73990 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:33:12.176775   73990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:33:12.176803   73990 node_conditions.go:123] node cpu capacity is 2
	I0403 19:33:12.176814   73990 node_conditions.go:105] duration metric: took 183.620688ms to run NodePressure ...
	I0403 19:33:12.176824   73990 start.go:241] waiting for startup goroutines ...
	I0403 19:33:12.176832   73990 start.go:246] waiting for cluster config update ...
	I0403 19:33:12.176840   73990 start.go:255] writing updated cluster config ...
	I0403 19:33:12.177113   73990 ssh_runner.go:195] Run: rm -f paused
	I0403 19:33:12.225807   73990 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 19:33:12.228521   73990 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-999005" cluster and "default" namespace by default
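	The health check logged at 19:33:11 above is a plain HTTPS GET against the apiserver's /healthz endpoint. A roughly equivalent manual probe (illustrative only, not part of the captured run; the CA path assumes a default ~/.minikube layout rather than this job's MINIKUBE_HOME) would be:
	
	  # Ask the apiserver for its health status, trusting the minikubeCA certificate.
	  curl --cacert "$HOME/.minikube/ca.crt" https://192.168.50.55:8443/healthz
	  # expected body: ok
	  # Or skip certificate verification for a quick check:
	  curl -k https://192.168.50.55:8443/healthz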
	I0403 19:33:08.150408   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:08.151003   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:08.151075   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:08.150981   77621 retry.go:31] will retry after 1.152427701s: waiting for domain to come up
	I0403 19:33:09.305349   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:09.305908   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:09.305936   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:09.305883   77621 retry.go:31] will retry after 1.688969841s: waiting for domain to come up
	I0403 19:33:10.996123   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:10.996600   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:10.996677   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:10.996605   77621 retry.go:31] will retry after 1.643659414s: waiting for domain to come up
	I0403 19:33:10.887137   75819 addons.go:514] duration metric: took 911.958897ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0403 19:33:11.366941   75819 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-999005" context rescaled to 1 replicas
	I0403 19:33:12.867785   75819 node_ready.go:53] node "flannel-999005" has status "Ready":"False"
	I0403 19:33:13.333186   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:33:13.333452   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:33:12.642410   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:12.642945   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:12.642979   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:12.642914   77621 retry.go:31] will retry after 2.077428265s: waiting for domain to come up
	I0403 19:33:14.722084   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:14.722568   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:14.722595   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:14.722556   77621 retry.go:31] will retry after 2.731919508s: waiting for domain to come up
	I0403 19:33:15.367030   75819 node_ready.go:53] node "flannel-999005" has status "Ready":"False"
	I0403 19:33:15.866309   75819 node_ready.go:49] node "flannel-999005" has status "Ready":"True"
	I0403 19:33:15.866339   75819 node_ready.go:38] duration metric: took 5.002629932s for node "flannel-999005" to be "Ready" ...
	I0403 19:33:15.866351   75819 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:15.878526   75819 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:17.884431   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:17.457578   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:17.458158   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:17.458186   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:17.458134   77621 retry.go:31] will retry after 2.937911428s: waiting for domain to come up
	I0403 19:33:20.397025   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:20.397485   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find current IP address of domain bridge-999005 in network mk-bridge-999005
	I0403 19:33:20.397542   77599 main.go:141] libmachine: (bridge-999005) DBG | I0403 19:33:20.397476   77621 retry.go:31] will retry after 4.371309871s: waiting for domain to come up
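	The retry loop above is the kvm2 driver polling libvirt for a DHCP lease on the mk-bridge-999005 network until the new domain reports an IP. The same state can be inspected by hand on the host (illustrative commands, not output from this run; the connection URI is the qemu:///system URI the driver is configured with):
	
	  # List the DHCP leases libvirt has handed out on the driver-created network.
	  virsh --connect qemu:///system net-dhcp-leases mk-bridge-999005
	  # Or query the domain's interface addresses from the lease database directly.
	  virsh --connect qemu:///system domifaddr bridge-999005 --source lease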
	I0403 19:33:20.384008   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:22.384126   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:24.384580   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:24.771404   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.771836   77599 main.go:141] libmachine: (bridge-999005) found domain IP: 192.168.39.185
	I0403 19:33:24.771856   77599 main.go:141] libmachine: (bridge-999005) reserving static IP address...
	I0403 19:33:24.771868   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has current primary IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.772259   77599 main.go:141] libmachine: (bridge-999005) DBG | unable to find host DHCP lease matching {name: "bridge-999005", mac: "52:54:00:7a:d8:f7", ip: "192.168.39.185"} in network mk-bridge-999005
	I0403 19:33:24.855210   77599 main.go:141] libmachine: (bridge-999005) reserved static IP address 192.168.39.185 for domain bridge-999005
	I0403 19:33:24.855240   77599 main.go:141] libmachine: (bridge-999005) waiting for SSH...
	I0403 19:33:24.855250   77599 main.go:141] libmachine: (bridge-999005) DBG | Getting to WaitForSSH function...
	I0403 19:33:24.858175   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.858563   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:24.858592   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.858757   77599 main.go:141] libmachine: (bridge-999005) DBG | Using SSH client type: external
	I0403 19:33:24.858784   77599 main.go:141] libmachine: (bridge-999005) DBG | Using SSH private key: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa (-rw-------)
	I0403 19:33:24.858847   77599 main.go:141] libmachine: (bridge-999005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0403 19:33:24.858868   77599 main.go:141] libmachine: (bridge-999005) DBG | About to run SSH command:
	I0403 19:33:24.858885   77599 main.go:141] libmachine: (bridge-999005) DBG | exit 0
	I0403 19:33:24.991462   77599 main.go:141] libmachine: (bridge-999005) DBG | SSH cmd err, output: <nil>: 
	I0403 19:33:24.991735   77599 main.go:141] libmachine: (bridge-999005) KVM machine creation complete
	I0403 19:33:24.992066   77599 main.go:141] libmachine: (bridge-999005) Calling .GetConfigRaw
	I0403 19:33:24.992629   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:24.992815   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:24.992938   77599 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0403 19:33:24.992952   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:24.994308   77599 main.go:141] libmachine: Detecting operating system of created instance...
	I0403 19:33:24.994326   77599 main.go:141] libmachine: Waiting for SSH to be available...
	I0403 19:33:24.994333   77599 main.go:141] libmachine: Getting to WaitForSSH function...
	I0403 19:33:24.994341   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:24.996876   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.997275   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:24.997304   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:24.997503   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:24.997680   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:24.997873   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:24.998025   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:24.998208   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:24.998408   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:24.998420   77599 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0403 19:33:25.106052   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:33:25.106078   77599 main.go:141] libmachine: Detecting the provisioner...
	I0403 19:33:25.106088   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.109437   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.109896   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.109925   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.110110   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.110294   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.110467   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.110624   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.110813   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.111134   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.111153   77599 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0403 19:33:25.216086   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0403 19:33:25.216142   77599 main.go:141] libmachine: found compatible host: buildroot
	I0403 19:33:25.216151   77599 main.go:141] libmachine: Provisioning with buildroot...
	I0403 19:33:25.216159   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:25.216374   77599 buildroot.go:166] provisioning hostname "bridge-999005"
	I0403 19:33:25.216401   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:25.216572   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.219422   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.219818   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.219856   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.219955   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.220119   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.220285   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.220404   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.220574   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.220845   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.220870   77599 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-999005 && echo "bridge-999005" | sudo tee /etc/hostname
	I0403 19:33:25.342189   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-999005
	
	I0403 19:33:25.342213   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.344813   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.345183   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.345211   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.345371   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.345582   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.345760   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.345918   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.346073   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.346281   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.346303   77599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-999005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-999005/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-999005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0403 19:33:25.458885   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0403 19:33:25.458914   77599 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20591-14371/.minikube CaCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20591-14371/.minikube}
	I0403 19:33:25.458936   77599 buildroot.go:174] setting up certificates
	I0403 19:33:25.458946   77599 provision.go:84] configureAuth start
	I0403 19:33:25.458954   77599 main.go:141] libmachine: (bridge-999005) Calling .GetMachineName
	I0403 19:33:25.459254   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:25.461901   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.462300   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.462326   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.462424   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.464888   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.465249   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.465284   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.465492   77599 provision.go:143] copyHostCerts
	I0403 19:33:25.465551   77599 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem, removing ...
	I0403 19:33:25.465580   77599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem
	I0403 19:33:25.465662   77599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/ca.pem (1082 bytes)
	I0403 19:33:25.465795   77599 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem, removing ...
	I0403 19:33:25.465805   77599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem
	I0403 19:33:25.465835   77599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/cert.pem (1123 bytes)
	I0403 19:33:25.465951   77599 exec_runner.go:144] found /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem, removing ...
	I0403 19:33:25.465960   77599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem
	I0403 19:33:25.465984   77599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20591-14371/.minikube/key.pem (1675 bytes)
	I0403 19:33:25.466044   77599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem org=jenkins.bridge-999005 san=[127.0.0.1 192.168.39.185 bridge-999005 localhost minikube]
	I0403 19:33:25.774649   77599 provision.go:177] copyRemoteCerts
	I0403 19:33:25.774710   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0403 19:33:25.774731   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.777197   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.777576   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.777599   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.777795   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.777962   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.778108   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.778212   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:25.860653   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0403 19:33:25.882849   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0403 19:33:25.904559   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
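	The three scp calls above place the cluster CA and the freshly generated server key pair under /etc/docker on the guest. A quick way to confirm they landed (illustrative; the key path assumes a default ~/.minikube layout, and the user/IP are the ones logged above):
	
	  # Verify the cert material copied by copyRemoteCerts.
	  ssh -i "$HOME/.minikube/machines/bridge-999005/id_rsa" docker@192.168.39.185 \
	    'ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'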
	I0403 19:33:25.926431   77599 provision.go:87] duration metric: took 467.475481ms to configureAuth
	I0403 19:33:25.926455   77599 buildroot.go:189] setting minikube options for container-runtime
	I0403 19:33:25.926650   77599 config.go:182] Loaded profile config "bridge-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:25.926725   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:25.929371   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.929809   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:25.929838   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:25.930028   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:25.930213   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.930335   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:25.930463   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:25.930620   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:25.930837   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:25.930859   77599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0403 19:33:26.149645   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0403 19:33:26.149674   77599 main.go:141] libmachine: Checking connection to Docker...
	I0403 19:33:26.149683   77599 main.go:141] libmachine: (bridge-999005) Calling .GetURL
	I0403 19:33:26.151048   77599 main.go:141] libmachine: (bridge-999005) DBG | using libvirt version 6000000
	I0403 19:33:26.153703   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.154090   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.154119   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.154326   77599 main.go:141] libmachine: Docker is up and running!
	I0403 19:33:26.154341   77599 main.go:141] libmachine: Reticulating splines...
	I0403 19:33:26.154349   77599 client.go:171] duration metric: took 23.685966388s to LocalClient.Create
	I0403 19:33:26.154377   77599 start.go:167] duration metric: took 23.686038349s to libmachine.API.Create "bridge-999005"
	I0403 19:33:26.154389   77599 start.go:293] postStartSetup for "bridge-999005" (driver="kvm2")
	I0403 19:33:26.154402   77599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0403 19:33:26.154427   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.154672   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0403 19:33:26.154704   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.156992   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.157408   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.157429   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.157561   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.157730   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.157866   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.157997   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:26.241074   77599 ssh_runner.go:195] Run: cat /etc/os-release
	I0403 19:33:26.245234   77599 info.go:137] Remote host: Buildroot 2023.02.9
	I0403 19:33:26.245256   77599 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/addons for local assets ...
	I0403 19:33:26.245308   77599 filesync.go:126] Scanning /home/jenkins/minikube-integration/20591-14371/.minikube/files for local assets ...
	I0403 19:33:26.245384   77599 filesync.go:149] local asset: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem -> 215522.pem in /etc/ssl/certs
	I0403 19:33:26.245467   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0403 19:33:26.255926   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:33:26.280402   77599 start.go:296] duration metric: took 125.998084ms for postStartSetup
	I0403 19:33:26.280453   77599 main.go:141] libmachine: (bridge-999005) Calling .GetConfigRaw
	I0403 19:33:26.281006   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:26.283814   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.284161   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.284198   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.284452   77599 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/config.json ...
	I0403 19:33:26.284648   77599 start.go:128] duration metric: took 23.834461991s to createHost
	I0403 19:33:26.284669   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.286766   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.287110   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.287143   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.287319   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.287485   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.287642   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.287742   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.287917   77599 main.go:141] libmachine: Using SSH client type: native
	I0403 19:33:26.288126   77599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0403 19:33:26.288141   77599 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0403 19:33:26.391168   77599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743708806.364931884
	
	I0403 19:33:26.391188   77599 fix.go:216] guest clock: 1743708806.364931884
	I0403 19:33:26.391194   77599 fix.go:229] Guest: 2025-04-03 19:33:26.364931884 +0000 UTC Remote: 2025-04-03 19:33:26.284659648 +0000 UTC m=+23.944823978 (delta=80.272236ms)
	I0403 19:33:26.391222   77599 fix.go:200] guest clock delta is within tolerance: 80.272236ms
	I0403 19:33:26.391226   77599 start.go:83] releasing machines lock for "bridge-999005", held for 23.941120784s
	I0403 19:33:26.391243   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.391495   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:26.393938   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.394286   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.394329   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.394501   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.394952   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.395143   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:26.395256   77599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0403 19:33:26.395299   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.395400   77599 ssh_runner.go:195] Run: cat /version.json
	I0403 19:33:26.395433   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:26.397923   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.398466   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.398524   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.398551   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.399177   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.399375   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.399399   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:26.399434   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:26.399582   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.399687   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:26.399711   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:26.399801   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:26.399953   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:26.400091   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:26.511483   77599 ssh_runner.go:195] Run: systemctl --version
	I0403 19:33:26.517463   77599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0403 19:33:26.670834   77599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0403 19:33:26.676690   77599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0403 19:33:26.676757   77599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0403 19:33:26.693357   77599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0403 19:33:26.693383   77599 start.go:495] detecting cgroup driver to use...
	I0403 19:33:26.693442   77599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0403 19:33:26.711536   77599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0403 19:33:26.727184   77599 docker.go:217] disabling cri-docker service (if available) ...
	I0403 19:33:26.727244   77599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0403 19:33:26.744189   77599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0403 19:33:26.758114   77599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0403 19:33:26.874699   77599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0403 19:33:27.029147   77599 docker.go:233] disabling docker service ...
	I0403 19:33:27.029214   77599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0403 19:33:27.042778   77599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0403 19:33:27.056884   77599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0403 19:33:27.165758   77599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0403 19:33:27.283993   77599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0403 19:33:27.297495   77599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0403 19:33:27.315338   77599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0403 19:33:27.315392   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.325005   77599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0403 19:33:27.325056   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.334776   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.345113   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.355007   77599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0403 19:33:27.364955   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.374894   77599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0403 19:33:27.391740   77599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
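	Taken together, the commands above point crictl at the CRI-O socket and rewrite a handful of keys in /etc/crio/crio.conf.d/02-crio.conf. One way to confirm the result on the guest (the expected lines are reconstructed from the logged sed commands, not captured output):
	
	  # crictl should now talk to CRI-O by default.
	  cat /etc/crictl.yaml            # runtime-endpoint: unix:///var/run/crio/crio.sock
	  # The drop-in should carry the pause image, cgroup driver and sysctl overrides.
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # roughly:
	  #   pause_image = "registry.k8s.io/pause:3.10"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",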
	I0403 19:33:27.401813   77599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0403 19:33:27.411004   77599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0403 19:33:27.411051   77599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0403 19:33:27.423701   77599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
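	The failed sysctl above shows that br_netfilter was not loaded yet, so the module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. Checking the same prerequisites by hand looks roughly like this (illustrative):
	
	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables   # readable once the module is loaded
	  cat /proc/sys/net/ipv4/ip_forward           # should print 1 after the echo above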
	I0403 19:33:27.432566   77599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:27.549830   77599 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0403 19:33:27.639431   77599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0403 19:33:27.639494   77599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0403 19:33:27.644011   77599 start.go:563] Will wait 60s for crictl version
	I0403 19:33:27.644059   77599 ssh_runner.go:195] Run: which crictl
	I0403 19:33:27.647488   77599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0403 19:33:27.684002   77599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0403 19:33:27.684079   77599 ssh_runner.go:195] Run: crio --version
	I0403 19:33:27.714223   77599 ssh_runner.go:195] Run: crio --version
	I0403 19:33:27.741585   77599 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0403 19:33:26.884187   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:28.885446   75819 pod_ready.go:103] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:30.384628   75819 pod_ready.go:93] pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.384654   75819 pod_ready.go:82] duration metric: took 14.506093364s for pod "coredns-668d6bf9bc-qxf6t" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.384666   75819 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.391041   75819 pod_ready.go:93] pod "etcd-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.391069   75819 pod_ready.go:82] duration metric: took 6.395099ms for pod "etcd-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.391082   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.396442   75819 pod_ready.go:93] pod "kube-apiserver-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.396465   75819 pod_ready.go:82] duration metric: took 5.374496ms for pod "kube-apiserver-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.396475   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.403106   75819 pod_ready.go:93] pod "kube-controller-manager-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.403125   75819 pod_ready.go:82] duration metric: took 6.641201ms for pod "kube-controller-manager-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.403137   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5wp5x" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.407151   75819 pod_ready.go:93] pod "kube-proxy-5wp5x" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.407185   75819 pod_ready.go:82] duration metric: took 4.039313ms for pod "kube-proxy-5wp5x" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.407197   75819 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.782264   75819 pod_ready.go:93] pod "kube-scheduler-flannel-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:30.782294   75819 pod_ready.go:82] duration metric: took 375.086145ms for pod "kube-scheduler-flannel-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:30.782309   75819 pod_ready.go:39] duration metric: took 14.915929273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:30.782329   75819 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:33:30.782393   75819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:33:30.798036   75819 api_server.go:72] duration metric: took 20.822884639s to wait for apiserver process to appear ...
	I0403 19:33:30.798067   75819 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:33:30.798089   75819 api_server.go:253] Checking apiserver healthz at https://192.168.72.34:8443/healthz ...
	I0403 19:33:30.803997   75819 api_server.go:279] https://192.168.72.34:8443/healthz returned 200:
	ok
	I0403 19:33:30.805211   75819 api_server.go:141] control plane version: v1.32.2
	I0403 19:33:30.805239   75819 api_server.go:131] duration metric: took 7.159207ms to wait for apiserver health ...
	I0403 19:33:30.805248   75819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:33:30.983942   75819 system_pods.go:59] 7 kube-system pods found
	I0403 19:33:30.984001   75819 system_pods.go:61] "coredns-668d6bf9bc-qxf6t" [c2f4058a-3dd8-4489-8fbc-05a2270375e4] Running
	I0403 19:33:30.984009   75819 system_pods.go:61] "etcd-flannel-999005" [67a1995c-eb31-4f43-85dc-abe52818818b] Running
	I0403 19:33:30.984015   75819 system_pods.go:61] "kube-apiserver-flannel-999005" [3b6f77fb-86b6-4f3a-91d7-ae7b58f084f8] Running
	I0403 19:33:30.984021   75819 system_pods.go:61] "kube-controller-manager-flannel-999005" [344cd255-fe98-41ef-818b-e79c931c72c3] Running
	I0403 19:33:30.984026   75819 system_pods.go:61] "kube-proxy-5wp5x" [e3f733e6-641a-4c29-94e7-a11cca7d4707] Running
	I0403 19:33:30.984035   75819 system_pods.go:61] "kube-scheduler-flannel-999005" [8a6014ba-ea10-4d6e-8e23-708cabaaeac9] Running
	I0403 19:33:30.984040   75819 system_pods.go:61] "storage-provisioner" [6785981d-1626-4f5a-ab63-000a23fcdce1] Running
	I0403 19:33:30.984048   75819 system_pods.go:74] duration metric: took 178.79249ms to wait for pod list to return data ...
	I0403 19:33:30.984056   75819 default_sa.go:34] waiting for default service account to be created ...
	I0403 19:33:31.182732   75819 default_sa.go:45] found service account: "default"
	I0403 19:33:31.182760   75819 default_sa.go:55] duration metric: took 198.696832ms for default service account to be created ...
	I0403 19:33:31.182774   75819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 19:33:31.385033   75819 system_pods.go:86] 7 kube-system pods found
	I0403 19:33:31.385057   75819 system_pods.go:89] "coredns-668d6bf9bc-qxf6t" [c2f4058a-3dd8-4489-8fbc-05a2270375e4] Running
	I0403 19:33:31.385062   75819 system_pods.go:89] "etcd-flannel-999005" [67a1995c-eb31-4f43-85dc-abe52818818b] Running
	I0403 19:33:31.385066   75819 system_pods.go:89] "kube-apiserver-flannel-999005" [3b6f77fb-86b6-4f3a-91d7-ae7b58f084f8] Running
	I0403 19:33:31.385069   75819 system_pods.go:89] "kube-controller-manager-flannel-999005" [344cd255-fe98-41ef-818b-e79c931c72c3] Running
	I0403 19:33:31.385073   75819 system_pods.go:89] "kube-proxy-5wp5x" [e3f733e6-641a-4c29-94e7-a11cca7d4707] Running
	I0403 19:33:31.385076   75819 system_pods.go:89] "kube-scheduler-flannel-999005" [8a6014ba-ea10-4d6e-8e23-708cabaaeac9] Running
	I0403 19:33:31.385079   75819 system_pods.go:89] "storage-provisioner" [6785981d-1626-4f5a-ab63-000a23fcdce1] Running
	I0403 19:33:31.385085   75819 system_pods.go:126] duration metric: took 202.306181ms to wait for k8s-apps to be running ...
	I0403 19:33:31.385091   75819 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 19:33:31.385126   75819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:33:31.404702   75819 system_svc.go:56] duration metric: took 19.600688ms WaitForService to wait for kubelet
	I0403 19:33:31.404730   75819 kubeadm.go:582] duration metric: took 21.4295849s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:31.404750   75819 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:33:31.582762   75819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:33:31.582801   75819 node_conditions.go:123] node cpu capacity is 2
	I0403 19:33:31.582836   75819 node_conditions.go:105] duration metric: took 178.062088ms to run NodePressure ...
	I0403 19:33:31.582854   75819 start.go:241] waiting for startup goroutines ...
	I0403 19:33:31.582869   75819 start.go:246] waiting for cluster config update ...
	I0403 19:33:31.582887   75819 start.go:255] writing updated cluster config ...
	I0403 19:33:31.583197   75819 ssh_runner.go:195] Run: rm -f paused
	I0403 19:33:31.635619   75819 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 19:33:31.638459   75819 out.go:177] * Done! kubectl is now configured to use "flannel-999005" cluster and "default" namespace by default
	I0403 19:33:27.742812   77599 main.go:141] libmachine: (bridge-999005) Calling .GetIP
	I0403 19:33:27.745608   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:27.745919   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:27.745942   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:27.746168   77599 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0403 19:33:27.751053   77599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
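	The one-liner above is an idempotent rewrite of /etc/hosts: it drops any existing host.minikube.internal entry and re-appends one pointing at the host gateway 192.168.39.1. Verifying the result is a single grep (illustrative):
	
	  grep 'host.minikube.internal' /etc/hosts
	  # expected: 192.168.39.1	host.minikube.internal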
	I0403 19:33:27.764022   77599 kubeadm.go:883] updating cluster {Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0403 19:33:27.764144   77599 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 19:33:27.764216   77599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:33:27.796330   77599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0403 19:33:27.796388   77599 ssh_runner.go:195] Run: which lz4
	I0403 19:33:27.800001   77599 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0403 19:33:27.803844   77599 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0403 19:33:27.803872   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0403 19:33:29.013823   77599 crio.go:462] duration metric: took 1.21384319s to copy over tarball
	I0403 19:33:29.013908   77599 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0403 19:33:31.265429   77599 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25149294s)
	I0403 19:33:31.265456   77599 crio.go:469] duration metric: took 2.251598795s to extract the tarball
	I0403 19:33:31.265466   77599 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0403 19:33:31.311717   77599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0403 19:33:31.357972   77599 crio.go:514] all images are preloaded for cri-o runtime.
	I0403 19:33:31.357990   77599 cache_images.go:84] Images are preloaded, skipping loading
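	When a preload tarball is available, minikube copies it into the guest and unpacks it into /var instead of pulling each image; the second crictl listing above confirms the images are now in CRI-O's store. A manual spot-check would look like this (illustrative):
	
	  # The control-plane images for v1.32.2 should be present without any pulls.
	  sudo crictl images | grep 'registry.k8s.io/kube-apiserver'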
	I0403 19:33:31.357996   77599 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.32.2 crio true true} ...
	I0403 19:33:31.358074   77599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-999005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0403 19:33:31.358151   77599 ssh_runner.go:195] Run: crio config
	I0403 19:33:31.405178   77599 cni.go:84] Creating CNI manager for "bridge"
	I0403 19:33:31.405201   77599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0403 19:33:31.405225   77599 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-999005 NodeName:bridge-999005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0403 19:33:31.405365   77599 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-999005"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0403 19:33:31.405440   77599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0403 19:33:31.414987   77599 binaries.go:44] Found k8s binaries, skipping transfer
	I0403 19:33:31.415051   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0403 19:33:31.423910   77599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0403 19:33:31.440728   77599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0403 19:33:31.457926   77599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
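# A sketch, not from the log: the rendered kubeadm config was just written to
# /var/tmp/minikube/kubeadm.yaml.new, and the version-pinned binaries live in
# /var/lib/minikube/binaries/v1.32.2 (checked above), so a dry run can validate
# the file without touching the node. Flags here are illustrative.
sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run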
	I0403 19:33:31.473099   77599 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0403 19:33:31.476839   77599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
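# The /etc/hosts one-liner above, unrolled for readability (same logic, a
# sketch rather than a change to the log): strip any stale entry for the
# control-plane alias, append the current IP, and install the temp file with
# `sudo cp` so the privileged write is the copy, not the shell redirect.
{
  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts    # keep everything else
  echo $'192.168.39.185\tcontrol-plane.minikube.internal'     # re-add with the fresh IP
} > /tmp/h.$$
sudo cp /tmp/h.$$ /etc/hosts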
	I0403 19:33:31.489178   77599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:31.648751   77599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:33:31.669990   77599 certs.go:68] Setting up /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005 for IP: 192.168.39.185
	I0403 19:33:31.670005   77599 certs.go:194] generating shared ca certs ...
	I0403 19:33:31.670019   77599 certs.go:226] acquiring lock for ca certs: {Name:mk34f09724355b828d31eb1ee97771fd3b4645c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:31.670173   77599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key
	I0403 19:33:31.670222   77599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key
	I0403 19:33:31.670233   77599 certs.go:256] generating profile certs ...
	I0403 19:33:31.670294   77599 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.key
	I0403 19:33:31.670311   77599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt with IP's: []
	I0403 19:33:31.786831   77599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt ...
	I0403 19:33:31.786859   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.crt: {Name:mkf649d0c8846125bd9d91dd0614dd3edfd43b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:31.787055   77599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.key ...
	I0403 19:33:31.787070   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/client.key: {Name:mkea47be4f98d7242ecb2031208f90bf3ddcfbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:31.787180   77599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7
	I0403 19:33:31.787196   77599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185]
	I0403 19:33:32.247425   77599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7 ...
	I0403 19:33:32.247474   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7: {Name:mkb6bfa4c7f67a4ee70ff58016a1c305b43c986d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.247650   77599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7 ...
	I0403 19:33:32.247672   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7: {Name:mk32e06deb5b5d3858815a6cc3fd3d129517ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.247754   77599 certs.go:381] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt.62ed2cb7 -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt
	I0403 19:33:32.247827   77599 certs.go:385] copying /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key.62ed2cb7 -> /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key
	I0403 19:33:32.247877   77599 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key
	I0403 19:33:32.247891   77599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt with IP's: []
	I0403 19:33:32.541993   77599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt ...
	I0403 19:33:32.542032   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt: {Name:mka4e60c00e3edab5ba1c58c999a89035bcada4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.542254   77599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key ...
	I0403 19:33:32.542274   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key: {Name:mkde5f934453d4d4ad6f3ee32b9cd909c8295965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:32.542504   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem (1338 bytes)
	W0403 19:33:32.542553   77599 certs.go:480] ignoring /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552_empty.pem, impossibly tiny 0 bytes
	I0403 19:33:32.542568   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca-key.pem (1675 bytes)
	I0403 19:33:32.542598   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/ca.pem (1082 bytes)
	I0403 19:33:32.542631   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/cert.pem (1123 bytes)
	I0403 19:33:32.542662   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/certs/key.pem (1675 bytes)
	I0403 19:33:32.542713   77599 certs.go:484] found cert: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem (1708 bytes)
	I0403 19:33:32.543437   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0403 19:33:32.573758   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0403 19:33:32.607840   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0403 19:33:32.640302   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0403 19:33:32.664859   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0403 19:33:32.688081   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0403 19:33:32.713262   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0403 19:33:32.738235   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/bridge-999005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0403 19:33:32.760858   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/ssl/certs/215522.pem --> /usr/share/ca-certificates/215522.pem (1708 bytes)
	I0403 19:33:32.785677   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0403 19:33:32.812357   77599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20591-14371/.minikube/certs/21552.pem --> /usr/share/ca-certificates/21552.pem (1338 bytes)
	I0403 19:33:32.837494   77599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0403 19:33:32.855867   77599 ssh_runner.go:195] Run: openssl version
	I0403 19:33:32.861693   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215522.pem && ln -fs /usr/share/ca-certificates/215522.pem /etc/ssl/certs/215522.pem"
	I0403 19:33:32.873958   77599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215522.pem
	I0403 19:33:32.878670   77599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  3 18:20 /usr/share/ca-certificates/215522.pem
	I0403 19:33:32.878720   77599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215522.pem
	I0403 19:33:32.884412   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215522.pem /etc/ssl/certs/3ec20f2e.0"
	I0403 19:33:32.895046   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0403 19:33:32.907127   77599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:33:32.911596   77599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  3 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:33:32.911653   77599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0403 19:33:32.917387   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0403 19:33:32.929021   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21552.pem && ln -fs /usr/share/ca-certificates/21552.pem /etc/ssl/certs/21552.pem"
	I0403 19:33:32.939538   77599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21552.pem
	I0403 19:33:32.943923   77599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  3 18:20 /usr/share/ca-certificates/21552.pem
	I0403 19:33:32.944004   77599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21552.pem
	I0403 19:33:32.949423   77599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21552.pem /etc/ssl/certs/51391683.0"
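# The <hash>.0 links created above follow OpenSSL's CApath convention: each CA
# is reachable under its subject hash, which is what `openssl x509 -hash`
# prints. A sketch of the same pattern for the minikubeCA case from the log
# (b5213941 is the hash computed at 19:33:32):
cert=/usr/share/ca-certificates/minikubeCA.pem
hash=$(openssl x509 -hash -noout -in "$cert")    # -> b5213941
sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"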
	I0403 19:33:32.960722   77599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0403 19:33:32.965345   77599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0403 19:33:32.965401   77599 kubeadm.go:392] StartCluster: {Name:bridge-999005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-999005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 19:33:32.965483   77599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0403 19:33:32.965542   77599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0403 19:33:33.006784   77599 cri.go:89] found id: ""
	I0403 19:33:33.006867   77599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0403 19:33:33.020183   77599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0403 19:33:33.032692   77599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0403 19:33:33.044354   77599 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0403 19:33:33.044374   77599 kubeadm.go:157] found existing configuration files:
	
	I0403 19:33:33.044424   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0403 19:33:33.054955   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0403 19:33:33.055012   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0403 19:33:33.065535   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0403 19:33:33.075309   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0403 19:33:33.075362   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0403 19:33:33.084429   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0403 19:33:33.094442   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0403 19:33:33.094494   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0403 19:33:33.104926   77599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0403 19:33:33.113846   77599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0403 19:33:33.113901   77599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0403 19:33:33.123447   77599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0403 19:33:33.175768   77599 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0403 19:33:33.175858   77599 kubeadm.go:310] [preflight] Running pre-flight checks
	I0403 19:33:33.283828   77599 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0403 19:33:33.283918   77599 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0403 19:33:33.284054   77599 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0403 19:33:33.292775   77599 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0403 19:33:33.394356   77599 out.go:235]   - Generating certificates and keys ...
	I0403 19:33:33.394483   77599 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0403 19:33:33.394561   77599 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0403 19:33:33.485736   77599 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0403 19:33:33.658670   77599 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0403 19:33:33.890328   77599 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0403 19:33:34.033068   77599 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0403 19:33:34.206188   77599 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0403 19:33:34.206439   77599 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-999005 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0403 19:33:34.284743   77599 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0403 19:33:34.285173   77599 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-999005 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0403 19:33:34.392026   77599 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0403 19:33:34.810433   77599 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0403 19:33:35.031395   77599 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0403 19:33:35.031595   77599 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0403 19:33:35.090736   77599 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0403 19:33:35.311577   77599 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0403 19:33:35.707554   77599 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0403 19:33:35.820376   77599 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0403 19:33:35.956268   77599 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0403 19:33:35.956874   77599 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0403 19:33:35.959282   77599 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0403 19:33:35.961148   77599 out.go:235]   - Booting up control plane ...
	I0403 19:33:35.961289   77599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0403 19:33:35.961399   77599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0403 19:33:35.961510   77599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0403 19:33:35.976979   77599 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0403 19:33:35.984810   77599 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0403 19:33:35.984907   77599 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0403 19:33:36.127595   77599 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0403 19:33:36.127753   77599 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0403 19:33:37.628536   77599 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502119988s
	I0403 19:33:37.628648   77599 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0403 19:33:42.629743   77599 kubeadm.go:310] [api-check] The API server is healthy after 5.001769611s
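# Hedged sketch of the two health probes kubeadm is polling above, run by hand
# on the node; the kubelet URL is the one printed at 19:33:36 and 8443 is the
# API server port used throughout this profile.
curl -sS http://127.0.0.1:10248/healthz; echo    # kubelet healthz
curl -skS https://127.0.0.1:8443/livez; echo     # API server liveness (self-signed cert, hence -k)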
	I0403 19:33:42.644211   77599 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0403 19:33:42.657726   77599 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0403 19:33:42.676447   77599 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0403 19:33:42.676702   77599 kubeadm.go:310] [mark-control-plane] Marking the node bridge-999005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0403 19:33:42.687306   77599 kubeadm.go:310] [bootstrap-token] Using token: fq7src.0us7ohixvgrd79kz
	I0403 19:33:42.688455   77599 out.go:235]   - Configuring RBAC rules ...
	I0403 19:33:42.688598   77599 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0403 19:33:42.699921   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0403 19:33:42.705060   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0403 19:33:42.708286   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0403 19:33:42.711842   77599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0403 19:33:42.714732   77599 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0403 19:33:43.034566   77599 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0403 19:33:43.461914   77599 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0403 19:33:44.038634   77599 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0403 19:33:44.038659   77599 kubeadm.go:310] 
	I0403 19:33:44.038745   77599 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0403 19:33:44.038755   77599 kubeadm.go:310] 
	I0403 19:33:44.038871   77599 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0403 19:33:44.038881   77599 kubeadm.go:310] 
	I0403 19:33:44.038916   77599 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0403 19:33:44.039008   77599 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0403 19:33:44.039100   77599 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0403 19:33:44.039134   77599 kubeadm.go:310] 
	I0403 19:33:44.039222   77599 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0403 19:33:44.039235   77599 kubeadm.go:310] 
	I0403 19:33:44.039297   77599 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0403 19:33:44.039307   77599 kubeadm.go:310] 
	I0403 19:33:44.039378   77599 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0403 19:33:44.039475   77599 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0403 19:33:44.039566   77599 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0403 19:33:44.039577   77599 kubeadm.go:310] 
	I0403 19:33:44.039690   77599 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0403 19:33:44.039800   77599 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0403 19:33:44.039812   77599 kubeadm.go:310] 
	I0403 19:33:44.039932   77599 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fq7src.0us7ohixvgrd79kz \
	I0403 19:33:44.040071   77599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 \
	I0403 19:33:44.040122   77599 kubeadm.go:310] 	--control-plane 
	I0403 19:33:44.040136   77599 kubeadm.go:310] 
	I0403 19:33:44.040260   77599 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0403 19:33:44.040279   77599 kubeadm.go:310] 
	I0403 19:33:44.040382   77599 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fq7src.0us7ohixvgrd79kz \
	I0403 19:33:44.040526   77599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:35050fe5fde5fe52a7642dddc546544927ee76d3bd6b50fca421627715d868f7 
	I0403 19:33:44.042310   77599 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
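# The --discovery-token-ca-cert-hash value in the join command above is the
# SHA-256 of the cluster CA's public key. A sketch of re-deriving it on this
# node (standard kubeadm recipe; the cert path is the certificatesDir from the
# config above); the output should match the sha256:... value printed above.
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'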
	I0403 19:33:44.042339   77599 cni.go:84] Creating CNI manager for "bridge"
	I0403 19:33:44.044752   77599 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0403 19:33:44.046058   77599 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0403 19:33:44.056620   77599 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
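# The 496-byte conflist copied above is not echoed in the log. For orientation
# only, a conventional bridge+portmap conflist for the 10.244.0.0/16 pod CIDR
# configured earlier would look roughly like this; field values are assumptions,
# not minikube's exact file.
cat <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF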
	I0403 19:33:44.072775   77599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0403 19:33:44.072865   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:44.072907   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-999005 minikube.k8s.io/updated_at=2025_04_03T19_33_44_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=85c3996d13eba09c6359a027222288a9aec57053 minikube.k8s.io/name=bridge-999005 minikube.k8s.io/primary=true
	I0403 19:33:44.091241   77599 ops.go:34] apiserver oom_adj: -16
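# Illustrative follow-up (not in the log): checking that the clusterrolebinding
# and the node labels applied above actually landed, with the same pinned kubectl.
sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac -o wide
sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node bridge-999005 --show-labels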
	I0403 19:33:44.213492   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:44.713802   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:45.214487   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:45.714490   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:46.213775   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:46.714137   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:47.214234   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:47.714484   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:48.214082   77599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0403 19:33:48.316673   77599 kubeadm.go:1113] duration metric: took 4.243867048s to wait for elevateKubeSystemPrivileges
	I0403 19:33:48.316706   77599 kubeadm.go:394] duration metric: took 15.351310395s to StartCluster
	I0403 19:33:48.316727   77599 settings.go:142] acquiring lock: {Name:mk92384ef10350a2ea8b1710e0b74c72a0214398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:48.316801   77599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:33:48.317861   77599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-14371/kubeconfig: {Name:mk73cf36c30cb8628f68d205e8cb0818a9975b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 19:33:48.318088   77599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0403 19:33:48.318097   77599 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0403 19:33:48.318175   77599 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0403 19:33:48.318244   77599 addons.go:69] Setting storage-provisioner=true in profile "bridge-999005"
	I0403 19:33:48.318265   77599 addons.go:238] Setting addon storage-provisioner=true in "bridge-999005"
	I0403 19:33:48.318297   77599 host.go:66] Checking if "bridge-999005" exists ...
	I0403 19:33:48.318313   77599 addons.go:69] Setting default-storageclass=true in profile "bridge-999005"
	I0403 19:33:48.318298   77599 config.go:182] Loaded profile config "bridge-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:33:48.318356   77599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-999005"
	I0403 19:33:48.318770   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.318796   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.318776   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.318879   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.319539   77599 out.go:177] * Verifying Kubernetes components...
	I0403 19:33:48.321103   77599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0403 19:33:48.336019   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0403 19:33:48.336019   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0403 19:33:48.336447   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.336540   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.336979   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.336996   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.337098   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.337121   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.337332   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.337465   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.337538   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:48.338013   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.338065   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.340961   77599 addons.go:238] Setting addon default-storageclass=true in "bridge-999005"
	I0403 19:33:48.340999   77599 host.go:66] Checking if "bridge-999005" exists ...
	I0403 19:33:48.341322   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.341365   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.355048   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39811
	I0403 19:33:48.355610   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.356196   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.356226   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.356592   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.356792   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:48.356827   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0403 19:33:48.357305   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.357816   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.357835   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.358248   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.358722   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:48.358870   77599 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20591-14371/.minikube/bin/docker-machine-driver-kvm2
	I0403 19:33:48.358911   77599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:33:48.360538   77599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0403 19:33:48.361702   77599 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:48.361718   77599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0403 19:33:48.361733   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:48.365062   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.365531   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:48.365554   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.365701   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:48.365870   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:48.366032   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:48.366166   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:48.374675   77599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I0403 19:33:48.375202   77599 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:33:48.375806   77599 main.go:141] libmachine: Using API Version  1
	I0403 19:33:48.375835   77599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:33:48.376141   77599 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:33:48.376322   77599 main.go:141] libmachine: (bridge-999005) Calling .GetState
	I0403 19:33:48.378097   77599 main.go:141] libmachine: (bridge-999005) Calling .DriverName
	I0403 19:33:48.378291   77599 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:48.378302   77599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0403 19:33:48.378314   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHHostname
	I0403 19:33:48.381118   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.381622   77599 main.go:141] libmachine: (bridge-999005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:d8:f7", ip: ""} in network mk-bridge-999005: {Iface:virbr1 ExpiryTime:2025-04-03 20:33:18 +0000 UTC Type:0 Mac:52:54:00:7a:d8:f7 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:bridge-999005 Clientid:01:52:54:00:7a:d8:f7}
	I0403 19:33:48.381645   77599 main.go:141] libmachine: (bridge-999005) DBG | domain bridge-999005 has defined IP address 192.168.39.185 and MAC address 52:54:00:7a:d8:f7 in network mk-bridge-999005
	I0403 19:33:48.381846   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHPort
	I0403 19:33:48.382025   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHKeyPath
	I0403 19:33:48.382166   77599 main.go:141] libmachine: (bridge-999005) Calling .GetSSHUsername
	I0403 19:33:48.382292   77599 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/bridge-999005/id_rsa Username:docker}
	I0403 19:33:48.586906   77599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0403 19:33:48.586933   77599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0403 19:33:48.720936   77599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0403 19:33:48.723342   77599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0403 19:33:49.076492   77599 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
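# The sed pipeline at 19:33:48 inserts a `hosts` block ahead of the
# `forward . /etc/resolv.conf` directive, which is what the line above reports.
# A sketch for eyeballing the result:
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
# expected fragment:
#        hosts {
#           192.168.39.1 host.minikube.internal
#           fallthrough
#        }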
	I0403 19:33:49.076540   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.076560   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.076816   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.076831   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.076840   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.076848   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.077211   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.077226   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.077254   77599 main.go:141] libmachine: (bridge-999005) DBG | Closing plugin on server side
	I0403 19:33:49.077567   77599 node_ready.go:35] waiting up to 15m0s for node "bridge-999005" to be "Ready" ...
	I0403 19:33:49.095818   77599 node_ready.go:49] node "bridge-999005" has status "Ready":"True"
	I0403 19:33:49.095840   77599 node_ready.go:38] duration metric: took 18.234764ms for node "bridge-999005" to be "Ready" ...
	I0403 19:33:49.095851   77599 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
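# A hand-run equivalent of the extra wait above, as a sketch: kubectl can block
# on the same component labels minikube lists (timeout mirrors the 15m budget).
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=15m
kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=15m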
	I0403 19:33:49.103291   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.103309   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.103560   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.103582   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.103585   77599 main.go:141] libmachine: (bridge-999005) DBG | Closing plugin on server side
	I0403 19:33:49.106640   77599 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:49.381709   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.381734   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.382012   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.382029   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.382037   77599 main.go:141] libmachine: Making call to close driver server
	I0403 19:33:49.382044   77599 main.go:141] libmachine: (bridge-999005) Calling .Close
	I0403 19:33:49.382304   77599 main.go:141] libmachine: (bridge-999005) DBG | Closing plugin on server side
	I0403 19:33:49.382308   77599 main.go:141] libmachine: Successfully made call to close driver server
	I0403 19:33:49.382332   77599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0403 19:33:49.383772   77599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0403 19:33:49.384901   77599 addons.go:514] duration metric: took 1.066742014s for enable addons: enabled=[default-storageclass storage-provisioner]
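# Illustrative check (not part of the log) that the two addons reported above
# are live; the object names are minikube's usual ones and are assumptions here.
export KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
kubectl -n kube-system get pod storage-provisioner   # storage-provisioner addon
kubectl get storageclass                              # "standard" should show "(default)"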
	I0403 19:33:49.580077   77599 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-999005" context rescaled to 1 replicas
	I0403 19:33:51.111757   77599 pod_ready.go:103] pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:52.112437   77599 pod_ready.go:93] pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:52.112460   77599 pod_ready.go:82] duration metric: took 3.005799611s for pod "coredns-668d6bf9bc-d2sp8" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:52.112469   77599 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:52.114218   77599 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-s979x" not found
	I0403 19:33:52.114244   77599 pod_ready.go:82] duration metric: took 1.768553ms for pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace to be "Ready" ...
	E0403 19:33:52.114257   77599 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-s979x" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-s979x" not found
	I0403 19:33:52.114267   77599 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:53.332014   66718 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0403 19:33:53.332308   66718 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0403 19:33:53.332328   66718 kubeadm.go:310] 
	I0403 19:33:53.332364   66718 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0403 19:33:53.332399   66718 kubeadm.go:310] 		timed out waiting for the condition
	I0403 19:33:53.332406   66718 kubeadm.go:310] 
	I0403 19:33:53.332435   66718 kubeadm.go:310] 	This error is likely caused by:
	I0403 19:33:53.332465   66718 kubeadm.go:310] 		- The kubelet is not running
	I0403 19:33:53.332560   66718 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0403 19:33:53.332566   66718 kubeadm.go:310] 
	I0403 19:33:53.332655   66718 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0403 19:33:53.332718   66718 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0403 19:33:53.332781   66718 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0403 19:33:53.332790   66718 kubeadm.go:310] 
	I0403 19:33:53.332922   66718 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0403 19:33:53.333025   66718 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0403 19:33:53.333033   66718 kubeadm.go:310] 
	I0403 19:33:53.333168   66718 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0403 19:33:53.333296   66718 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0403 19:33:53.333410   66718 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0403 19:33:53.333518   66718 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0403 19:33:53.333528   66718 kubeadm.go:310] 
	I0403 19:33:53.334367   66718 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0403 19:33:53.334492   66718 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0403 19:33:53.334554   66718 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0403 19:33:53.334604   66718 kubeadm.go:394] duration metric: took 7m59.310981648s to StartCluster
	I0403 19:33:53.334636   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0403 19:33:53.334685   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0403 19:33:53.373643   66718 cri.go:89] found id: ""
	I0403 19:33:53.373669   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.373682   66718 logs.go:284] No container was found matching "kube-apiserver"
	I0403 19:33:53.373689   66718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0403 19:33:53.373736   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0403 19:33:53.403561   66718 cri.go:89] found id: ""
	I0403 19:33:53.403587   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.403595   66718 logs.go:284] No container was found matching "etcd"
	I0403 19:33:53.403600   66718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0403 19:33:53.403655   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0403 19:33:53.433381   66718 cri.go:89] found id: ""
	I0403 19:33:53.433411   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.433420   66718 logs.go:284] No container was found matching "coredns"
	I0403 19:33:53.433427   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0403 19:33:53.433480   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0403 19:33:53.464729   66718 cri.go:89] found id: ""
	I0403 19:33:53.464758   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.464769   66718 logs.go:284] No container was found matching "kube-scheduler"
	I0403 19:33:53.464775   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0403 19:33:53.464843   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0403 19:33:53.495666   66718 cri.go:89] found id: ""
	I0403 19:33:53.495697   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.495708   66718 logs.go:284] No container was found matching "kube-proxy"
	I0403 19:33:53.495715   66718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0403 19:33:53.495782   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0403 19:33:53.527704   66718 cri.go:89] found id: ""
	I0403 19:33:53.527730   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.527739   66718 logs.go:284] No container was found matching "kube-controller-manager"
	I0403 19:33:53.527747   66718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0403 19:33:53.527804   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0403 19:33:53.567852   66718 cri.go:89] found id: ""
	I0403 19:33:53.567874   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.567881   66718 logs.go:284] No container was found matching "kindnet"
	I0403 19:33:53.567887   66718 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0403 19:33:53.567943   66718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0403 19:33:53.597334   66718 cri.go:89] found id: ""
	I0403 19:33:53.597363   66718 logs.go:282] 0 containers: []
	W0403 19:33:53.597374   66718 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0403 19:33:53.597386   66718 logs.go:123] Gathering logs for kubelet ...
	I0403 19:33:53.597399   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0403 19:33:53.653211   66718 logs.go:123] Gathering logs for dmesg ...
	I0403 19:33:53.653246   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0403 19:33:53.666175   66718 logs.go:123] Gathering logs for describe nodes ...
	I0403 19:33:53.666201   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0403 19:33:53.736375   66718 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0403 19:33:53.736397   66718 logs.go:123] Gathering logs for CRI-O ...
	I0403 19:33:53.736409   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0403 19:33:53.837412   66718 logs.go:123] Gathering logs for container status ...
	I0403 19:33:53.837449   66718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0403 19:33:53.876433   66718 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0403 19:33:53.876481   66718 out.go:270] * 
	W0403 19:33:53.876533   66718 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:33:53.876547   66718 out.go:270] * 
	W0403 19:33:53.877616   66718 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0403 19:33:53.880186   66718 out.go:201] 
	W0403 19:33:53.881256   66718 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0403 19:33:53.881290   66718 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0403 19:33:53.881311   66718 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0403 19:33:53.882318   66718 out.go:201] 
	I0403 19:33:54.120332   77599 pod_ready.go:103] pod "etcd-bridge-999005" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:56.122064   77599 pod_ready.go:103] pod "etcd-bridge-999005" in "kube-system" namespace has status "Ready":"False"
	I0403 19:33:58.119737   77599 pod_ready.go:93] pod "etcd-bridge-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:58.119764   77599 pod_ready.go:82] duration metric: took 6.005488859s for pod "etcd-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.119775   77599 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.123208   77599 pod_ready.go:93] pod "kube-apiserver-bridge-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:58.123232   77599 pod_ready.go:82] duration metric: took 3.448838ms for pod "kube-apiserver-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.123245   77599 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.126391   77599 pod_ready.go:93] pod "kube-controller-manager-bridge-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:58.126411   77599 pod_ready.go:82] duration metric: took 3.157876ms for pod "kube-controller-manager-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.126422   77599 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-kp7mg" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.129660   77599 pod_ready.go:93] pod "kube-proxy-kp7mg" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:58.129677   77599 pod_ready.go:82] duration metric: took 3.247584ms for pod "kube-proxy-kp7mg" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.129688   77599 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.133889   77599 pod_ready.go:93] pod "kube-scheduler-bridge-999005" in "kube-system" namespace has status "Ready":"True"
	I0403 19:33:58.133911   77599 pod_ready.go:82] duration metric: took 4.215142ms for pod "kube-scheduler-bridge-999005" in "kube-system" namespace to be "Ready" ...
	I0403 19:33:58.133921   77599 pod_ready.go:39] duration metric: took 9.038057268s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0403 19:33:58.133939   77599 api_server.go:52] waiting for apiserver process to appear ...
	I0403 19:33:58.133987   77599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 19:33:58.148976   77599 api_server.go:72] duration metric: took 9.830850735s to wait for apiserver process to appear ...
	I0403 19:33:58.149002   77599 api_server.go:88] waiting for apiserver healthz status ...
	I0403 19:33:58.149021   77599 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0403 19:33:58.152765   77599 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0403 19:33:58.153801   77599 api_server.go:141] control plane version: v1.32.2
	I0403 19:33:58.153825   77599 api_server.go:131] duration metric: took 4.814693ms to wait for apiserver health ...
	I0403 19:33:58.153833   77599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0403 19:33:58.318924   77599 system_pods.go:59] 7 kube-system pods found
	I0403 19:33:58.318971   77599 system_pods.go:61] "coredns-668d6bf9bc-d2sp8" [22f55c40-046d-4876-870a-29a97951f661] Running
	I0403 19:33:58.318980   77599 system_pods.go:61] "etcd-bridge-999005" [10bef341-2f47-418c-93ed-0e09236c9fb8] Running
	I0403 19:33:58.318986   77599 system_pods.go:61] "kube-apiserver-bridge-999005" [c0986f2f-42ad-4c25-bcfc-306c002e19a1] Running
	I0403 19:33:58.318992   77599 system_pods.go:61] "kube-controller-manager-bridge-999005" [d202b4b9-ea4e-4685-9f14-81f090e0d7d7] Running
	I0403 19:33:58.319003   77599 system_pods.go:61] "kube-proxy-kp7mg" [2b5f323f-0954-4bf4-8fde-0574c17c9e0b] Running
	I0403 19:33:58.319008   77599 system_pods.go:61] "kube-scheduler-bridge-999005" [2dc43204-833f-4d34-bd9d-20426247559e] Running
	I0403 19:33:58.319015   77599 system_pods.go:61] "storage-provisioner" [0e4050fe-17bb-4246-a551-61dcdd16389c] Running
	I0403 19:33:58.319023   77599 system_pods.go:74] duration metric: took 165.18288ms to wait for pod list to return data ...
	I0403 19:33:58.319034   77599 default_sa.go:34] waiting for default service account to be created ...
	I0403 19:33:58.517854   77599 default_sa.go:45] found service account: "default"
	I0403 19:33:58.517883   77599 default_sa.go:55] duration metric: took 198.841522ms for default service account to be created ...
	I0403 19:33:58.517895   77599 system_pods.go:116] waiting for k8s-apps to be running ...
	I0403 19:33:58.720732   77599 system_pods.go:86] 7 kube-system pods found
	I0403 19:33:58.720761   77599 system_pods.go:89] "coredns-668d6bf9bc-d2sp8" [22f55c40-046d-4876-870a-29a97951f661] Running
	I0403 19:33:58.720769   77599 system_pods.go:89] "etcd-bridge-999005" [10bef341-2f47-418c-93ed-0e09236c9fb8] Running
	I0403 19:33:58.720775   77599 system_pods.go:89] "kube-apiserver-bridge-999005" [c0986f2f-42ad-4c25-bcfc-306c002e19a1] Running
	I0403 19:33:58.720780   77599 system_pods.go:89] "kube-controller-manager-bridge-999005" [d202b4b9-ea4e-4685-9f14-81f090e0d7d7] Running
	I0403 19:33:58.720785   77599 system_pods.go:89] "kube-proxy-kp7mg" [2b5f323f-0954-4bf4-8fde-0574c17c9e0b] Running
	I0403 19:33:58.720789   77599 system_pods.go:89] "kube-scheduler-bridge-999005" [2dc43204-833f-4d34-bd9d-20426247559e] Running
	I0403 19:33:58.720794   77599 system_pods.go:89] "storage-provisioner" [0e4050fe-17bb-4246-a551-61dcdd16389c] Running
	I0403 19:33:58.720803   77599 system_pods.go:126] duration metric: took 202.901205ms to wait for k8s-apps to be running ...
	I0403 19:33:58.720811   77599 system_svc.go:44] waiting for kubelet service to be running ....
	I0403 19:33:58.720857   77599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 19:33:58.736498   77599 system_svc.go:56] duration metric: took 15.680603ms WaitForService to wait for kubelet
	I0403 19:33:58.736522   77599 kubeadm.go:582] duration metric: took 10.418400754s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0403 19:33:58.736539   77599 node_conditions.go:102] verifying NodePressure condition ...
	I0403 19:33:58.918022   77599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0403 19:33:58.918053   77599 node_conditions.go:123] node cpu capacity is 2
	I0403 19:33:58.918067   77599 node_conditions.go:105] duration metric: took 181.522606ms to run NodePressure ...
	I0403 19:33:58.918081   77599 start.go:241] waiting for startup goroutines ...
	I0403 19:33:58.918091   77599 start.go:246] waiting for cluster config update ...
	I0403 19:33:58.918111   77599 start.go:255] writing updated cluster config ...
	I0403 19:33:58.918438   77599 ssh_runner.go:195] Run: rm -f paused
	I0403 19:33:58.966577   77599 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0403 19:33:58.968374   77599 out.go:177] * Done! kubectl is now configured to use "bridge-999005" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.026987889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743709735026958108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12a619a7-12fc-4ed5-a86a-edfe967a41eb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.027431219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a67d3302-8f41-4c93-818b-e79f14c6f125 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.027494092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a67d3302-8f41-4c93-818b-e79f14c6f125 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.027538265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a67d3302-8f41-4c93-818b-e79f14c6f125 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.056131850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9d9dea8-5105-40cc-bd53-3d89e01f6906 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.056222334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9d9dea8-5105-40cc-bd53-3d89e01f6906 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.057302652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fbe77d0-05da-4d0b-baf6-749179dc75ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.057694175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743709735057669844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fbe77d0-05da-4d0b-baf6-749179dc75ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.058201597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95ca5323-d5fb-44fd-8731-343ad688c28b name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.058250473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95ca5323-d5fb-44fd-8731-343ad688c28b name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.058287607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=95ca5323-d5fb-44fd-8731-343ad688c28b name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.087454330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fc1315a-de5f-4bcd-b923-fc69e8322797 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.087543932Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fc1315a-de5f-4bcd-b923-fc69e8322797 name=/runtime.v1.RuntimeService/Version
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.089226715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0f1f5f8-7f8f-45a8-a1d4-6addc84bd927 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.089621182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743709735089597063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0f1f5f8-7f8f-45a8-a1d4-6addc84bd927 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.090236648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a227120-3823-4bd8-90df-f932d94965a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.090294342Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a227120-3823-4bd8-90df-f932d94965a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.090334158Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7a227120-3823-4bd8-90df-f932d94965a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.120959819Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a121b8b-c00d-43bf-a82b-28affa3ef93f name=/runtime.v1.RuntimeService/Version
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.121053729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a121b8b-c00d-43bf-a82b-28affa3ef93f name=/runtime.v1.RuntimeService/Version
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.122500339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=527c4264-1da8-48ee-be9c-b3a155283ab1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.122934888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743709735122915816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=527c4264-1da8-48ee-be9c-b3a155283ab1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.123380146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8bb3025-a362-4ac1-a001-222789710241 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.123447471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8bb3025-a362-4ac1-a001-222789710241 name=/runtime.v1.RuntimeService/ListContainers
	Apr 03 19:48:55 old-k8s-version-471019 crio[636]: time="2025-04-03 19:48:55.123478884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f8bb3025-a362-4ac1-a001-222789710241 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 3 19:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052726] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041853] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.065841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.955511] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.571384] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.620728] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.063202] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054417] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.185024] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.123908] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.218372] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +6.279584] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.069499] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.643502] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[Apr 3 19:26] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 3 19:30] systemd-fstab-generator[5045]: Ignoring "noauto" option for root device
	[Apr 3 19:31] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.102429] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:48:55 up 23 min,  0 users,  load average: 0.19, 0.09, 0.02
	Linux old-k8s-version-471019 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000995950)
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]: goroutine 160 [select]:
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00074def0, 0x4f0ac20, 0xc0008b7630, 0x1, 0xc0001020c0)
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00096c540, 0xc0001020c0)
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bf73a0, 0xc0002ae760)
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7166]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 03 19:48:51 old-k8s-version-471019 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 03 19:48:51 old-k8s-version-471019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 03 19:48:51 old-k8s-version-471019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 176.
	Apr 03 19:48:51 old-k8s-version-471019 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 03 19:48:51 old-k8s-version-471019 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7175]: I0403 19:48:51.948656    7175 server.go:416] Version: v1.20.0
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7175]: I0403 19:48:51.948933    7175 server.go:837] Client rotation is on, will bootstrap in background
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7175]: I0403 19:48:51.950632    7175 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7175]: I0403 19:48:51.951533    7175 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 03 19:48:51 old-k8s-version-471019 kubelet[7175]: W0403 19:48:51.951573    7175 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-471019 -n old-k8s-version-471019
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 2 (221.156236ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-471019" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (358.80s)
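The kubeadm output captured above already names the troubleshooting steps for a kubelet that never came up. A minimal sketch of running them by hand, assuming SSH access to the node via the profile name recorded in these logs (old-k8s-version-471019); the journalctl/crictl commands are the ones quoted in the kubeadm error text:

	# open a shell on the minikube node for this profile
	minikube ssh -p old-k8s-version-471019
	# inside the node: check kubelet state and recent logs
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list all Kubernetes containers known to CRI-O (pause containers filtered out)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect a failing container once its ID is known
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID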

                                                
                                    

Test pass (271/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 24.85
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.32.2/json-events 13.42
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.13
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 117.49
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 134.5
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.48
35 TestAddons/parallel/Registry 56.49
37 TestAddons/parallel/InspektorGadget 10.7
38 TestAddons/parallel/MetricsServer 6.7
40 TestAddons/parallel/CSI 83.06
41 TestAddons/parallel/Headlamp 64.64
42 TestAddons/parallel/CloudSpanner 5.53
43 TestAddons/parallel/LocalPath 57.02
44 TestAddons/parallel/NvidiaDevicePlugin 6.61
45 TestAddons/parallel/Yakd 11.8
47 TestAddons/StoppedEnableDisable 91.21
48 TestCertOptions 99.17
49 TestCertExpiration 272.55
51 TestForceSystemdFlag 80.11
52 TestForceSystemdEnv 42.02
54 TestKVMDriverInstallOrUpdate 6.48
58 TestErrorSpam/setup 42.58
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.66
63 TestErrorSpam/stop 4.66
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 90.16
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 62.41
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
75 TestFunctional/serial/CacheCmd/cache/add_local 2.08
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.04
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 32.38
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.28
86 TestFunctional/serial/LogsFileCmd 1.33
87 TestFunctional/serial/InvalidService 3.79
89 TestFunctional/parallel/ConfigCmd 0.31
90 TestFunctional/parallel/DashboardCmd 15.18
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.8
97 TestFunctional/parallel/ServiceCmdConnect 22.49
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 44.36
101 TestFunctional/parallel/SSHCmd 0.39
102 TestFunctional/parallel/CpCmd 1.21
103 TestFunctional/parallel/MySQL 23.58
104 TestFunctional/parallel/FileSync 0.19
105 TestFunctional/parallel/CertSync 1.22
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
113 TestFunctional/parallel/License 0.58
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.85
128 TestFunctional/parallel/ImageCommands/Setup 1.86
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
134 TestFunctional/parallel/ProfileCmd/profile_list 0.3
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.55
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.82
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.76
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.92
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.45
142 TestFunctional/parallel/ServiceCmd/DeployApp 6.14
143 TestFunctional/parallel/ServiceCmd/List 0.47
144 TestFunctional/parallel/MountCmd/any-port 9.29
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
147 TestFunctional/parallel/ServiceCmd/Format 0.3
148 TestFunctional/parallel/ServiceCmd/URL 0.33
149 TestFunctional/parallel/Version/short 0.05
150 TestFunctional/parallel/Version/components 0.63
151 TestFunctional/parallel/MountCmd/specific-port 1.74
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 195.25
161 TestMultiControlPlane/serial/DeployApp 6.74
162 TestMultiControlPlane/serial/PingHostFromPods 1.1
163 TestMultiControlPlane/serial/AddWorkerNode 58.72
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
166 TestMultiControlPlane/serial/CopyFile 12.55
167 TestMultiControlPlane/serial/StopSecondaryNode 91.58
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.62
169 TestMultiControlPlane/serial/RestartSecondaryNode 52.75
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 427.9
172 TestMultiControlPlane/serial/DeleteSecondaryNode 17.66
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.58
174 TestMultiControlPlane/serial/StopCluster 272.64
175 TestMultiControlPlane/serial/RestartCluster 146.98
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
177 TestMultiControlPlane/serial/AddSecondaryNode 76.87
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
182 TestJSONOutput/start/Command 89.55
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.64
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.57
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.35
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.18
210 TestMainNoArgs 0.04
211 TestMinikubeProfile 86.17
214 TestMountStart/serial/StartWithMountFirst 29.08
215 TestMountStart/serial/VerifyMountFirst 0.36
216 TestMountStart/serial/StartWithMountSecond 29.7
217 TestMountStart/serial/VerifyMountSecond 0.36
218 TestMountStart/serial/DeleteFirst 0.87
219 TestMountStart/serial/VerifyMountPostDelete 0.36
220 TestMountStart/serial/Stop 1.26
221 TestMountStart/serial/RestartStopped 23.2
222 TestMountStart/serial/VerifyMountPostStop 0.37
225 TestMultiNode/serial/FreshStart2Nodes 116.23
226 TestMultiNode/serial/DeployApp2Nodes 6.02
227 TestMultiNode/serial/PingHostFrom2Pods 0.73
228 TestMultiNode/serial/AddNode 47.35
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.55
231 TestMultiNode/serial/CopyFile 6.87
232 TestMultiNode/serial/StopNode 2.21
233 TestMultiNode/serial/StartAfterStop 39.31
234 TestMultiNode/serial/RestartKeepsNodes 338.38
235 TestMultiNode/serial/DeleteNode 2.64
236 TestMultiNode/serial/StopMultiNode 181.6
237 TestMultiNode/serial/RestartMultiNode 114.52
238 TestMultiNode/serial/ValidateNameConflict 46.77
245 TestScheduledStopUnix 116.64
249 TestRunningBinaryUpgrade 221.8
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 94.05
256 TestNoKubernetes/serial/StartWithStopK8s 18.24
257 TestNoKubernetes/serial/Start 25.11
266 TestPause/serial/Start 72.5
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
268 TestNoKubernetes/serial/ProfileList 1.3
269 TestNoKubernetes/serial/Stop 1.29
270 TestNoKubernetes/serial/StartNoArgs 45.14
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
272 TestStoppedBinaryUpgrade/Setup 2.32
273 TestStoppedBinaryUpgrade/Upgrade 100.62
282 TestNetworkPlugins/group/false 5.42
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
290 TestStartStop/group/embed-certs/serial/FirstStart 65.73
292 TestStartStop/group/no-preload/serial/FirstStart 114.73
293 TestStartStop/group/embed-certs/serial/DeployApp 9.31
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
295 TestStartStop/group/embed-certs/serial/Stop 90.81
296 TestStartStop/group/no-preload/serial/DeployApp 11.27
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
298 TestStartStop/group/no-preload/serial/Stop 91.3
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.38
301 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
302 TestStartStop/group/embed-certs/serial/SecondStart 345.52
303 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
305 TestStartStop/group/default-k8s-diff-port/serial/Stop 90.8
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 350.71
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 299.77
312 TestStartStop/group/old-k8s-version/serial/Stop 2.29
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
315 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
318 TestStartStop/group/embed-certs/serial/Pause 2.59
320 TestStartStop/group/newest-cni/serial/FirstStart 49.25
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
323 TestStartStop/group/newest-cni/serial/Stop 7.32
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/newest-cni/serial/SecondStart 38.01
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
329 TestStartStop/group/no-preload/serial/Pause 2.62
330 TestNetworkPlugins/group/auto/Start 83.41
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.75
335 TestNetworkPlugins/group/kindnet/Start 73.61
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
339 TestStartStop/group/newest-cni/serial/Pause 2.34
340 TestNetworkPlugins/group/calico/Start 110.25
341 TestNetworkPlugins/group/auto/KubeletFlags 0.21
342 TestNetworkPlugins/group/auto/NetCatPod 11.24
343 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
344 TestNetworkPlugins/group/auto/DNS 0.16
345 TestNetworkPlugins/group/auto/Localhost 0.12
346 TestNetworkPlugins/group/auto/HairPin 0.12
347 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
348 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
349 TestNetworkPlugins/group/custom-flannel/Start 72.78
350 TestNetworkPlugins/group/kindnet/DNS 0.15
351 TestNetworkPlugins/group/kindnet/Localhost 0.13
352 TestNetworkPlugins/group/kindnet/HairPin 0.12
353 TestNetworkPlugins/group/enable-default-cni/Start 93.9
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/KubeletFlags 0.27
356 TestNetworkPlugins/group/calico/NetCatPod 14.48
357 TestNetworkPlugins/group/calico/DNS 0.15
358 TestNetworkPlugins/group/calico/Localhost 0.13
359 TestNetworkPlugins/group/calico/HairPin 0.14
360 TestNetworkPlugins/group/flannel/Start 66.36
361 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
363 TestNetworkPlugins/group/custom-flannel/DNS 0.14
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
366 TestNetworkPlugins/group/bridge/Start 56.66
367 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
368 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.24
369 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
371 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
374 TestNetworkPlugins/group/flannel/NetCatPod 11.27
375 TestNetworkPlugins/group/flannel/DNS 0.17
376 TestNetworkPlugins/group/flannel/Localhost 0.12
377 TestNetworkPlugins/group/flannel/HairPin 0.12
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
380 TestNetworkPlugins/group/bridge/NetCatPod 11.22
381 TestNetworkPlugins/group/bridge/DNS 0.13
382 TestNetworkPlugins/group/bridge/Localhost 0.11
383 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (24.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-286102 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-286102 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.847058753s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.85s)
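
The --download-only start above runs with -o=json, so minikube emits one JSON event per stdout line instead of human-readable progress. A minimal sketch of consuming that stream from Go (the profile name below is hypothetical and the event schema is deliberately decoded into a generic map rather than assumed):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Start minikube in download-only mode with JSON output and decode each
	// event line as it arrives. The profile name is made up for this sketch.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json", "--download-only",
		"-p", "download-only-demo", "--kubernetes-version=v1.20.0",
		"--container-runtime=crio", "--driver=kvm2")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println("pipe error:", err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println("start error:", err)
		return
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var event map[string]interface{}
		if json.Unmarshal(sc.Bytes(), &event) == nil {
			fmt.Println(event) // print the decoded event without assuming its fields
		}
	}
	if err := cmd.Wait(); err != nil {
		fmt.Println("minikube exited with error:", err)
	}
}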

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0403 18:12:04.372747   21552 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0403 18:12:04.372853   21552 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
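
The preload-exists subtest only confirms that the tarball fetched by the json-events run is sitting in the local cache at the path logged above. A rough equivalent of that check, assuming the same cache layout under MINIKUBE_HOME (the helper name is invented for this sketch):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether the CRI-O preload tarball for the given
// Kubernetes version is already cached under the minikube home directory.
// The directory layout mirrors the path shown in the log above.
func preloadExists(minikubeHome, k8sVersion string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.20.0"))
}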

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-286102
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-286102: exit status 85 (56.106686ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-286102 | jenkins | v1.35.0 | 03 Apr 25 18:11 UTC |          |
	|         | -p download-only-286102        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 18:11:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 18:11:39.564024   21564 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:11:39.564122   21564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:11:39.564131   21564 out.go:358] Setting ErrFile to fd 2...
	I0403 18:11:39.564134   21564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:11:39.564323   21564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	W0403 18:11:39.564423   21564 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20591-14371/.minikube/config/config.json: open /home/jenkins/minikube-integration/20591-14371/.minikube/config/config.json: no such file or directory
	I0403 18:11:39.564948   21564 out.go:352] Setting JSON to true
	I0403 18:11:39.565770   21564 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3245,"bootTime":1743700655,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 18:11:39.565819   21564 start.go:139] virtualization: kvm guest
	I0403 18:11:39.567900   21564 out.go:97] [download-only-286102] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0403 18:11:39.568000   21564 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball: no such file or directory
	I0403 18:11:39.568030   21564 notify.go:220] Checking for updates...
	I0403 18:11:39.569023   21564 out.go:169] MINIKUBE_LOCATION=20591
	I0403 18:11:39.570114   21564 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 18:11:39.571099   21564 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 18:11:39.572099   21564 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 18:11:39.573089   21564 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0403 18:11:39.574854   21564 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0403 18:11:39.575084   21564 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 18:11:39.667348   21564 out.go:97] Using the kvm2 driver based on user configuration
	I0403 18:11:39.667374   21564 start.go:297] selected driver: kvm2
	I0403 18:11:39.667381   21564 start.go:901] validating driver "kvm2" against <nil>
	I0403 18:11:39.667677   21564 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 18:11:39.667784   21564 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 18:11:39.681769   21564 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0403 18:11:39.681807   21564 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 18:11:39.682284   21564 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0403 18:11:39.682433   21564 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0403 18:11:39.682457   21564 cni.go:84] Creating CNI manager for ""
	I0403 18:11:39.682501   21564 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 18:11:39.682510   21564 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 18:11:39.682550   21564 start.go:340] cluster config:
	{Name:download-only-286102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-286102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 18:11:39.682702   21564 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 18:11:39.684281   21564 out.go:97] Downloading VM boot image ...
	I0403 18:11:39.684309   21564 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20591-14371/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0403 18:11:49.485511   21564 out.go:97] Starting "download-only-286102" primary control-plane node in "download-only-286102" cluster
	I0403 18:11:49.485540   21564 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0403 18:11:49.576868   21564 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0403 18:11:49.576904   21564 cache.go:56] Caching tarball of preloaded images
	I0403 18:11:49.577079   21564 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0403 18:11:49.578588   21564 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0403 18:11:49.578604   21564 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0403 18:11:49.676302   21564 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-286102 host does not exist
	  To start a cluster, run: "minikube start -p download-only-286102"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
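
Exit status 85 is the expected outcome here: the profile was created with --download-only, so there is no host for `minikube logs` to collect from, and the subtest passes as long as the command returns quickly. A sketch of checking that exit code from Go (command line taken from the log; this is not the test's own helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run "minikube logs" against a download-only profile and inspect the exit code.
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-286102")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		fmt.Printf("minikube logs exited with code %d (85 is what this subtest expects)\n", exitErr.ExitCode())
	case err != nil:
		fmt.Println("could not run minikube:", err)
	default:
		fmt.Println("unexpected success: a download-only profile has no host to collect logs from")
	}
}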

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-286102
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/json-events (13.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-304015 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-304015 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.419478318s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (13.42s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0403 18:12:18.096116   21552 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0403 18:12:18.096166   21552 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-304015
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-304015: exit status 85 (55.957895ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-286102 | jenkins | v1.35.0 | 03 Apr 25 18:11 UTC |                     |
	|         | -p download-only-286102        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC | 03 Apr 25 18:12 UTC |
	| delete  | -p download-only-286102        | download-only-286102 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC | 03 Apr 25 18:12 UTC |
	| start   | -o=json --download-only        | download-only-304015 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC |                     |
	|         | -p download-only-304015        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 18:12:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 18:12:04.713677   21812 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:12:04.713925   21812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:12:04.713936   21812 out.go:358] Setting ErrFile to fd 2...
	I0403 18:12:04.713942   21812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:12:04.714170   21812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 18:12:04.714739   21812 out.go:352] Setting JSON to true
	I0403 18:12:04.715602   21812 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3270,"bootTime":1743700655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 18:12:04.715699   21812 start.go:139] virtualization: kvm guest
	I0403 18:12:04.717598   21812 out.go:97] [download-only-304015] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 18:12:04.717712   21812 notify.go:220] Checking for updates...
	I0403 18:12:04.718910   21812 out.go:169] MINIKUBE_LOCATION=20591
	I0403 18:12:04.720008   21812 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 18:12:04.720994   21812 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 18:12:04.721918   21812 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 18:12:04.722786   21812 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0403 18:12:04.724469   21812 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0403 18:12:04.724724   21812 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 18:12:04.755336   21812 out.go:97] Using the kvm2 driver based on user configuration
	I0403 18:12:04.755362   21812 start.go:297] selected driver: kvm2
	I0403 18:12:04.755368   21812 start.go:901] validating driver "kvm2" against <nil>
	I0403 18:12:04.755666   21812 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 18:12:04.755737   21812 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-14371/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 18:12:04.770176   21812 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0403 18:12:04.770213   21812 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 18:12:04.770667   21812 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0403 18:12:04.770798   21812 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0403 18:12:04.770839   21812 cni.go:84] Creating CNI manager for ""
	I0403 18:12:04.770890   21812 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0403 18:12:04.770902   21812 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 18:12:04.770955   21812 start.go:340] cluster config:
	{Name:download-only-304015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-304015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 18:12:04.771046   21812 iso.go:125] acquiring lock: {Name:mkf6f50c7ede9611c6490e7e32606ab15d31c93b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 18:12:04.772500   21812 out.go:97] Starting "download-only-304015" primary control-plane node in "download-only-304015" cluster
	I0403 18:12:04.772512   21812 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 18:12:05.290614   21812 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0403 18:12:05.290646   21812 cache.go:56] Caching tarball of preloaded images
	I0403 18:12:05.290776   21812 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0403 18:12:05.292509   21812 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0403 18:12:05.292522   21812 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0403 18:12:05.389340   21812 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66 -> /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0403 18:12:14.693468   21812 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0403 18:12:14.693557   21812 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20591-14371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-304015 host does not exist
	  To start a cluster, run: "minikube start -p download-only-304015"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-304015
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
I0403 18:12:18.648058   21552 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-586392 --alsologtostderr --binary-mirror http://127.0.0.1:42171 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-586392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-586392
--- PASS: TestBinaryMirror (0.58s)
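
The --binary-mirror flag points minikube's kubectl/kubeadm/kubelet downloads at a local HTTP endpoint (127.0.0.1:42171 in this run) instead of dl.k8s.io. A minimal sketch of a file server one could stand up for that purpose; the port matches the log, while the ./mirror directory and its layout are assumptions:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve cached Kubernetes binaries over plain HTTP so that
	// "minikube start --binary-mirror http://127.0.0.1:42171" can fetch them
	// locally. The directory and its internal layout are assumed here.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:42171", nil))
}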

                                                
                                    
x
+
TestOffline (117.49s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-427421 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-427421 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m56.460410948s)
helpers_test.go:175: Cleaning up "offline-crio-427421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-427421
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-427421: (1.028618243s)
--- PASS: TestOffline (117.49s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-445082
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-445082: exit status 85 (47.417204ms)

                                                
                                                
-- stdout --
	* Profile "addons-445082" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-445082"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-445082
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-445082: exit status 85 (47.936965ms)

                                                
                                                
-- stdout --
	* Profile "addons-445082" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-445082"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (134.5s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-445082 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-445082 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m14.504589536s)
--- PASS: TestAddons/Setup (134.50s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-445082 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-445082 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.48s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-445082 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-445082 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [37f82061-84c9-4077-b38d-c8cf2a067e89] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [37f82061-84c9-4077-b38d-c8cf2a067e89] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.002896553s
addons_test.go:633: (dbg) Run:  kubectl --context addons-445082 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-445082 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-445082 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.48s)
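
The fake-credentials check reduces to exec'ing printenv inside the busybox pod and confirming that the gcp-auth webhook injected the expected variables. A hedged sketch of that probe (context and pod name copied from the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read GOOGLE_APPLICATION_CREDENTIALS from inside the busybox test pod.
	out, err := exec.Command("kubectl", "--context", "addons-445082",
		"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	val := strings.TrimSpace(string(out))
	fmt.Println("GOOGLE_APPLICATION_CREDENTIALS =", val)
	// The gcp-auth webhook is expected to mount the fake key and point this
	// variable at it; an empty value would mean injection did not happen.
	if val == "" {
		fmt.Println("no credentials injected")
	}
}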

                                                
                                    
x
+
TestAddons/parallel/Registry (56.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 28.211535ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-f7fn6" [db35ba87-f90e-477b-a105-2bde628b1715] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004188662s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gnl8c" [9d95bda0-c392-473d-b05c-01546c40ea02] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003884869s
addons_test.go:331: (dbg) Run:  kubectl --context addons-445082 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-445082 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-445082 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (43.739310074s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 ip
2025/04/03 18:15:49 [DEBUG] GET http://192.168.39.130:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (56.49s)
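
Beyond the in-cluster `wget --spider` probe, the log shows a follow-up GET against the registry through the node IP on port 5000. A small sketch of that external reachability check, assuming the address printed in the DEBUG line above:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the registry addon through the minikube node IP, as the DEBUG line above does.
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://192.168.39.130:5000/")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with", resp.Status)
}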

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.7s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8lfbl" [8459a80a-d50d-487c-b0be-5d43e7e8c4b1] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004053188s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-445082 addons disable inspektor-gadget --alsologtostderr -v=1: (5.691489175s)
--- PASS: TestAddons/parallel/InspektorGadget (10.70s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.7s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.370106ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-6zvlq" [fe49f00e-6163-4052-914d-5a02c1f44677] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002984975s
addons_test.go:402: (dbg) Run:  kubectl --context addons-445082 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.70s)

                                                
                                    
x
+
TestAddons/parallel/CSI (83.06s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0403 18:14:59.803769   21552 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0403 18:14:59.826573   21552 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0403 18:14:59.826596   21552 kapi.go:107] duration metric: took 22.84028ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 22.851574ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-445082 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-445082 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a52b6e28-3095-4550-8cf1-cbd05316d537] Pending
helpers_test.go:344: "task-pv-pod" [a52b6e28-3095-4550-8cf1-cbd05316d537] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a52b6e28-3095-4550-8cf1-cbd05316d537] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 48.003459979s
addons_test.go:511: (dbg) Run:  kubectl --context addons-445082 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-445082 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-445082 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-445082 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-445082 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-445082 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-445082 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [df74d25c-71ab-453e-9ace-e4db8520fb30] Pending
helpers_test.go:344: "task-pv-pod-restore" [df74d25c-71ab-453e-9ace-e4db8520fb30] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [df74d25c-71ab-453e-9ace-e4db8520fb30] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.004025982s
addons_test.go:553: (dbg) Run:  kubectl --context addons-445082 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-445082 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-445082 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-445082 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.666065737s)
--- PASS: TestAddons/parallel/CSI (83.06s)
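
The long run of `get pvc ... -o jsonpath={.status.phase}` calls is simply a poll loop waiting for the claim to bind before the snapshot/restore steps proceed. A sketch of such a loop (claim name and context taken from the log; the 2s interval and overall timeout are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the PVC phase via kubectl jsonpath, mirroring the
// repeated helper calls in the log above, until it reports "Bound" or times out.
func waitForPVCBound(ctx, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", "default").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pvc %s not Bound within %s", name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-445082", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}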

                                                
                                    
x
+
TestAddons/parallel/Headlamp (64.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-445082 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-4zbkj" [f0d15f45-5123-4d08-ac43-770688cc8763] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-4zbkj" [f0d15f45-5123-4d08-ac43-770688cc8763] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 57.002823399s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-445082 addons disable headlamp --alsologtostderr -v=1: (6.830053972s)
--- PASS: TestAddons/parallel/Headlamp (64.64s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-kxr27" [d6662cac-9d76-4f51-856c-3a1bedd4b751] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003352713s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (57.02s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-445082 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-445082 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445082 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [df190e08-1faf-44fd-81b1-0489969137b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [df190e08-1faf-44fd-81b1-0489969137b2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [df190e08-1faf-44fd-81b1-0489969137b2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003928102s
addons_test.go:906: (dbg) Run:  kubectl --context addons-445082 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 ssh "cat /opt/local-path-provisioner/pvc-c189de65-8aca-4e94-9ce0-37185dfffce6_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-445082 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-445082 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-445082 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.249188619s)
--- PASS: TestAddons/parallel/LocalPath (57.02s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.61s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-d5kr9" [84407566-6cac-4282-8a90-7dc046450e7c] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003145303s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.61s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-vqp58" [d695566a-00c6-4789-b7b6-8acd2a9a7894] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003221537s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-445082 addons disable yakd --alsologtostderr -v=1: (5.793905656s)
--- PASS: TestAddons/parallel/Yakd (11.80s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (91.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-445082
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-445082: (1m30.937266256s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-445082
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-445082
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-445082
--- PASS: TestAddons/StoppedEnableDisable (91.21s)

                                                
                                    
x
+
TestCertOptions (99.17s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-528707 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-528707 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m37.680097509s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-528707 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-528707 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-528707 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-528707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-528707
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-528707: (1.017002879s)
--- PASS: TestCertOptions (99.17s)
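The certificate options exercised above can also be inspected manually. The sketch below reuses this run's profile name and commands; the grep for the SAN block is an illustrative assumption about how to read the openssl output, not something the test itself runs.

	# Start with extra apiserver SANs and a non-default apiserver port.
	out/minikube-linux-amd64 start -p cert-options-528707 --memory=2048 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
	# The extra IPs/names should show up as Subject Alternative Names in the generated cert:
	out/minikube-linux-amd64 -p cert-options-528707 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# The kubeconfig should point at port 8555:
	kubectl --context cert-options-528707 config view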

                                                
                                    
x
+
TestCertExpiration (272.55s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-954352 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
I0403 19:17:29.985894   21552 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0403 19:17:32.121166   21552 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0403 19:17:32.155989   21552 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0403 19:17:32.156024   21552 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0403 19:17:32.156099   21552 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0403 19:17:32.156136   21552 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1585875892/002/docker-machine-driver-kvm2
I0403 19:17:32.189303   21552 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1585875892/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0007e34e8 gz:0xc0007e3570 tar:0xc0007e3520 tar.bz2:0xc0007e3530 tar.gz:0xc0007e3540 tar.xz:0xc0007e3550 tar.zst:0xc0007e3560 tbz2:0xc0007e3530 tgz:0xc0007e3540 txz:0xc0007e3550 tzst:0xc0007e3560 xz:0xc0007e3578 zip:0xc0007e3580 zst:0xc0007e3590] Getters:map[file:0xc001d15450 http:0xc0006ba5f0 https:0xc0006ba640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0403 19:17:32.189353   21552 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1585875892/002/docker-machine-driver-kvm2
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-954352 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m0.238233939s)
E0403 19:18:52.325673   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-954352 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-954352 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (31.347144685s)
helpers_test.go:175: Cleaning up "cert-expiration-954352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-954352
--- PASS: TestCertExpiration (272.55s)
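The expiration test is just two starts of the same profile: the first issues short-lived (3m) certificates, and the second, run after they have expired, must rotate them. A by-hand sketch with this run's profile name:

	# Issue certificates that expire in 3 minutes.
	out/minikube-linux-amd64 start -p cert-expiration-954352 --memory=2048 --cert-expiration=3m \
	  --driver=kvm2 --container-runtime=crio
	# Wait out the 3-minute window, then restart with a normal 1-year expiration; this must rotate the expired certs.
	out/minikube-linux-amd64 start -p cert-expiration-954352 --memory=2048 --cert-expiration=8760h \
	  --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p cert-expiration-954352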

                                                
                                    
x
+
TestForceSystemdFlag (80.11s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-426227 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-426227 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.914379326s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-426227 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-426227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-426227
--- PASS: TestForceSystemdFlag (80.11s)
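To see what `--force-systemd` changes, the CRI-O drop-in the test cats can be checked by hand; the grep for cgroup_manager is an illustrative assumption about which setting to look at, the rest reuses this run's commands.

	out/minikube-linux-amd64 start -p force-systemd-flag-426227 --memory=2048 --force-systemd \
	  --driver=kvm2 --container-runtime=crio
	# Expect the CRI-O drop-in to select the systemd cgroup manager:
	out/minikube-linux-amd64 -p force-systemd-flag-426227 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	out/minikube-linux-amd64 delete -p force-systemd-flag-426227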

                                                
                                    
x
+
TestForceSystemdEnv (42.02s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-653812 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-653812 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.063516227s)
helpers_test.go:175: Cleaning up "force-systemd-env-653812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-653812
--- PASS: TestForceSystemdEnv (42.02s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (6.48s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0403 19:17:27.629157   21552 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0403 19:17:27.629316   21552 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0403 19:17:27.663677   21552 install.go:62] docker-machine-driver-kvm2: exit status 1
W0403 19:17:27.663870   21552 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0403 19:17:27.663930   21552 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1585875892/001/docker-machine-driver-kvm2
I0403 19:17:27.879920   21552 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1585875892/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0007e34e8 gz:0xc0007e3570 tar:0xc0007e3520 tar.bz2:0xc0007e3530 tar.gz:0xc0007e3540 tar.xz:0xc0007e3550 tar.zst:0xc0007e3560 tbz2:0xc0007e3530 tgz:0xc0007e3540 txz:0xc0007e3550 tzst:0xc0007e3560 xz:0xc0007e3578 zip:0xc0007e3580 zst:0xc0007e3590] Getters:map[file:0xc001d14540 http:0xc000a00640 https:0xc000a00690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0403 19:17:27.879962   21552 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1585875892/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (6.48s)
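The log above shows the driver install/update path: validate whatever docker-machine-driver-kvm2 is first on PATH, and if it is missing or older than the wanted release, download a new one, falling back from the arch-specific asset to the common one on a 404. A rough by-hand equivalent; the direct `version` invocation and the curl fallback are assumptions mirroring what install.go and download.go report, not commands run by the test.

	# Which driver binary would be picked up, and what version does it report?
	which docker-machine-driver-kvm2
	docker-machine-driver-kvm2 version
	# If it is older than the wanted release (v1.3.0 in this run), fetch a newer binary;
	# try the arch-specific asset first and fall back to the common one on 404:
	curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64 \
	  || curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2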

                                                
                                    
x
+
TestErrorSpam/setup (42.58s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-347043 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-347043 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-347043 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-347043 --driver=kvm2  --container-runtime=crio: (42.582507719s)
--- PASS: TestErrorSpam/setup (42.58s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

                                                
                                    
x
+
TestErrorSpam/stop (4.66s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 stop: (1.578512523s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 stop: (1.52558066s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-347043 --log_dir /tmp/nospam-347043 stop: (1.551986624s)
--- PASS: TestErrorSpam/stop (4.66s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20591-14371/.minikube/files/etc/test/nested/copy/21552/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (90.16s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789300 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-789300 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m30.157997277s)
--- PASS: TestFunctional/serial/StartWithProxy (90.16s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (62.41s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0403 18:22:19.290401   21552 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789300 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-789300 --alsologtostderr -v=8: (1m2.40848964s)
functional_test.go:680: soft start took 1m2.409124327s for "functional-789300" cluster.
I0403 18:23:21.699185   21552 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (62.41s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-789300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 cache add registry.k8s.io/pause:3.1: (1.125220747s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 cache add registry.k8s.io/pause:3.3: (1.220506837s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 cache add registry.k8s.io/pause:latest: (1.141462479s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)
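The remote-cache steps above are plain `cache add` calls; a by-hand sketch using the same images, plus the list/verify commands the later cache subtests run:

	out/minikube-linux-amd64 -p functional-789300 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 -p functional-789300 cache add registry.k8s.io/pause:3.3
	out/minikube-linux-amd64 -p functional-789300 cache add registry.k8s.io/pause:latest
	# List the host-side cache and confirm the images are present inside the node:
	out/minikube-linux-amd64 cache list
	out/minikube-linux-amd64 -p functional-789300 ssh sudo crictl images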

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-789300 /tmp/TestFunctionalserialCacheCmdcacheadd_local1421767060/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 cache add minikube-local-cache-test:functional-789300
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 cache add minikube-local-cache-test:functional-789300: (1.787751752s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 cache delete minikube-local-cache-test:functional-789300
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-789300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (195.2955ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 cache reload: (1.024625411s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
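The reload sequence is: delete the image inside the node, confirm `crictl inspecti` now fails, then `cache reload` restores it from the host-side cache. Sketch reusing this run's commands:

	out/minikube-linux-amd64 -p functional-789300 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# Expected to fail now ("no such image ... present"):
	out/minikube-linux-amd64 -p functional-789300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-789300 cache reload
	# Should succeed again after the reload:
	out/minikube-linux-amd64 -p functional-789300 ssh sudo crictl inspecti registry.k8s.io/pause:latest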

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 kubectl -- --context functional-789300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-789300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.38s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-789300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.383410264s)
functional_test.go:778: restart took 32.383505615s for "functional-789300" cluster.
I0403 18:24:02.008461   21552 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (32.38s)
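The restart above pushes a flag to a control-plane component via `--extra-config=<component>.<flag>=<value>`; a by-hand sketch with the same option, followed by the control-plane health check the next subtest performs:

	out/minikube-linux-amd64 start -p functional-789300 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# Confirm the control plane came back healthy:
	kubectl --context functional-789300 get po -l tier=control-plane -n kube-system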

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-789300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 logs: (1.282500784s)
--- PASS: TestFunctional/serial/LogsCmd (1.28s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 logs --file /tmp/TestFunctionalserialLogsFileCmd1287639129/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 logs --file /tmp/TestFunctionalserialLogsFileCmd1287639129/001/logs.txt: (1.329790879s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.79s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-789300 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-789300
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-789300: exit status 115 (253.111676ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.170:31245 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-789300 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.79s)
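The contents of testdata/invalidsvc.yaml are not shown in this log; the point of the test is that `minikube service` refuses to print a usable URL for a service with no running pod behind it and exits with SVC_UNREACHABLE (status 115). A sketch of the same flow:

	kubectl --context functional-789300 apply -f testdata/invalidsvc.yaml
	# Expected to fail with exit status 115 (SVC_UNREACHABLE) since no running pod backs the service:
	out/minikube-linux-amd64 service invalid-svc -p functional-789300
	kubectl --context functional-789300 delete -f testdata/invalidsvc.yaml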

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789300 config get cpus: exit status 14 (50.251339ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789300 config get cpus: exit status 14 (49.319879ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
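The config subtest relies on `config get` exiting 14 when a key is unset; a by-hand sketch of the same set/get/unset cycle:

	out/minikube-linux-amd64 -p functional-789300 config unset cpus
	out/minikube-linux-amd64 -p functional-789300 config get cpus     # exit status 14: key not found
	out/minikube-linux-amd64 -p functional-789300 config set cpus 2
	out/minikube-linux-amd64 -p functional-789300 config get cpus     # prints the stored value
	out/minikube-linux-amd64 -p functional-789300 config unset cpus
	out/minikube-linux-amd64 -p functional-789300 config get cpus     # exit status 14 again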

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (15.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-789300 --alsologtostderr -v=1]
E0403 18:24:34.727500   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-789300 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 30690: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.18s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
E0403 18:24:34.482994   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:24:34.565296   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-789300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (145.83304ms)

                                                
                                                
-- stdout --
	* [functional-789300] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0403 18:24:34.511978   30407 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:24:34.512328   30407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:24:34.512342   30407 out.go:358] Setting ErrFile to fd 2...
	I0403 18:24:34.512348   30407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:24:34.512677   30407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 18:24:34.513382   30407 out.go:352] Setting JSON to false
	I0403 18:24:34.514641   30407 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4019,"bootTime":1743700655,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 18:24:34.514768   30407 start.go:139] virtualization: kvm guest
	I0403 18:24:34.516954   30407 out.go:177] * [functional-789300] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 18:24:34.518381   30407 notify.go:220] Checking for updates...
	I0403 18:24:34.518429   30407 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 18:24:34.520119   30407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 18:24:34.521564   30407 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 18:24:34.522652   30407 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 18:24:34.523723   30407 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 18:24:34.524742   30407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 18:24:34.526356   30407 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 18:24:34.527262   30407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:24:34.527318   30407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:24:34.544210   30407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I0403 18:24:34.544613   30407 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:24:34.545066   30407 main.go:141] libmachine: Using API Version  1
	I0403 18:24:34.545105   30407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:24:34.545401   30407 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:24:34.545571   30407 main.go:141] libmachine: (functional-789300) Calling .DriverName
	I0403 18:24:34.545861   30407 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 18:24:34.546316   30407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:24:34.546358   30407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:24:34.561745   30407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35893
	I0403 18:24:34.562096   30407 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:24:34.562516   30407 main.go:141] libmachine: Using API Version  1
	I0403 18:24:34.562533   30407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:24:34.562895   30407 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:24:34.563066   30407 main.go:141] libmachine: (functional-789300) Calling .DriverName
	I0403 18:24:34.595593   30407 out.go:177] * Using the kvm2 driver based on existing profile
	I0403 18:24:34.596731   30407 start.go:297] selected driver: kvm2
	I0403 18:24:34.596756   30407 start.go:901] validating driver "kvm2" against &{Name:functional-789300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-789300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 18:24:34.596845   30407 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 18:24:34.598684   30407 out.go:201] 
	W0403 18:24:34.599722   30407 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0403 18:24:34.600777   30407 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789300 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
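`--dry-run` runs the start-time validation without touching the cluster; 250MB is below minikube's 1800MB usable minimum, so the first command exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, without the undersized memory request, passes against the existing profile. Sketch from this run's commands:

	# Fails validation: requested memory is below the 1800MB minimum (exit status 23).
	out/minikube-linux-amd64 start -p functional-789300 --dry-run --memory 250MB \
	  --alsologtostderr --driver=kvm2 --container-runtime=crio
	# Same dry run without the memory override succeeds.
	out/minikube-linux-amd64 start -p functional-789300 --dry-run --alsologtostderr -v=1 \
	  --driver=kvm2 --container-runtime=crio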

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-789300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-789300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (145.472552ms)

                                                
                                                
-- stdout --
	* [functional-789300] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0403 18:24:32.692286   29960 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:24:32.692406   29960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:24:32.692418   29960 out.go:358] Setting ErrFile to fd 2...
	I0403 18:24:32.692423   29960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:24:32.692808   29960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 18:24:32.693504   29960 out.go:352] Setting JSON to false
	I0403 18:24:32.694778   29960 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4018,"bootTime":1743700655,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 18:24:32.694870   29960 start.go:139] virtualization: kvm guest
	I0403 18:24:32.696607   29960 out.go:177] * [functional-789300] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0403 18:24:32.697920   29960 notify.go:220] Checking for updates...
	I0403 18:24:32.697946   29960 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 18:24:32.699347   29960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 18:24:32.700496   29960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 18:24:32.701810   29960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 18:24:32.703130   29960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 18:24:32.704337   29960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 18:24:32.706054   29960 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 18:24:32.706672   29960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:24:32.706730   29960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:24:32.727248   29960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38461
	I0403 18:24:32.727735   29960 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:24:32.728333   29960 main.go:141] libmachine: Using API Version  1
	I0403 18:24:32.728364   29960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:24:32.728745   29960 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:24:32.729038   29960 main.go:141] libmachine: (functional-789300) Calling .DriverName
	I0403 18:24:32.729305   29960 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 18:24:32.729581   29960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:24:32.729639   29960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:24:32.745727   29960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0403 18:24:32.746121   29960 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:24:32.746647   29960 main.go:141] libmachine: Using API Version  1
	I0403 18:24:32.746685   29960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:24:32.747039   29960 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:24:32.747294   29960 main.go:141] libmachine: (functional-789300) Calling .DriverName
	I0403 18:24:32.779253   29960 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0403 18:24:32.780437   29960 start.go:297] selected driver: kvm2
	I0403 18:24:32.780452   29960 start.go:901] validating driver "kvm2" against &{Name:functional-789300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-789300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 18:24:32.780546   29960 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 18:24:32.782290   29960 out.go:201] 
	W0403 18:24:32.783582   29960 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0403 18:24:32.784637   29960 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
E0403 18:24:34.401529   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:24:34.407880   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:24:34.419263   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.80s)
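`status` accepts a Go template via `-f` and JSON via `-o json`; the sketch below reuses the exact formats from this run (the `kublet:` label is simply the key chosen in the test's template, the underlying field is `.Kubelet`).

	out/minikube-linux-amd64 -p functional-789300 status
	out/minikube-linux-amd64 -p functional-789300 status \
	  -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	out/minikube-linux-amd64 -p functional-789300 status -o json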

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (22.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-789300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-789300 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-qp2qb" [be7a852f-ba86-4ae4-919c-997180ac3a14] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-qp2qb" [be7a852f-ba86-4ae4-919c-997180ac3a14] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.00324541s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.170:30835
functional_test.go:1692: http://192.168.39.170:30835: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-qp2qb

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.170:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.170:30835
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.49s)
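The connectivity check is a stock deployment plus a NodePort service, with `minikube service --url` resolving the node IP and port; the trailing curl is an illustrative step (the test fetches the URL from Go rather than shelling out).

	kubectl --context functional-789300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-789300 expose deployment hello-node-connect --type=NodePort --port=8080
	# Resolve the NodePort URL (http://192.168.39.170:30835 in this run) and hit it:
	URL=$(out/minikube-linux-amd64 -p functional-789300 service hello-node-connect --url)
	curl -s "$URL"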

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (44.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8c886e47-3795-4343-bc7c-05e0a48d2b44] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003651573s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-789300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-789300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-789300 get pvc myclaim -o=json
I0403 18:24:15.720305   21552 retry.go:31] will retry after 1.520293027s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:374c3e12-603e-472c-9403-0324f530bde7 ResourceVersion:828 Generation:0 CreationTimestamp:2025-04-03 18:24:15 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-374c3e12-603e-472c-9403-0324f530bde7 StorageClassName:0xc001e24060 VolumeMode:0xc001e24070 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-789300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-789300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [084cac51-973e-4da8-89c0-23704ae88343] Pending
helpers_test.go:344: "sp-pod" [084cac51-973e-4da8-89c0-23704ae88343] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [084cac51-973e-4da8-89c0-23704ae88343] Running
E0403 18:24:34.441414   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003675238s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-789300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-789300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-789300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bb409360-1aee-4fab-89ce-cb18d6657085] Pending
helpers_test.go:344: "sp-pod" [bb409360-1aee-4fab-89ce-cb18d6657085] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bb409360-1aee-4fab-89ce-cb18d6657085] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003739767s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-789300 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.36s)
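The retry entry above ("testpvc phase = Pending, want Bound") shows what the test is waiting for after applying pvc.yaml: the claim's status.phase must reach Bound before the consuming pod is created. A minimal sketch of that polling loop, shelling out to kubectl with the context and claim name from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

// pvcPhase reads the claim's status.phase via kubectl, mirroring the
// "phase = Pending, want Bound" retry seen in the log above.
func pvcPhase(context, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pvc", name, "-o=json").Output()
	if err != nil {
		return "", err
	}
	var pvc struct {
		Status struct {
			Phase string `json:"phase"`
		} `json:"status"`
	}
	if err := json.Unmarshal(out, &pvc); err != nil {
		return "", err
	}
	return pvc.Status.Phase, nil
}

func main() {
	for i := 0; i < 30; i++ {
		phase, err := pvcPhase("functional-789300", "myclaim")
		if err == nil && phase == "Bound" {
			fmt.Println("claim bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for Bound")
}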

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh -n functional-789300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 cp functional-789300:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2044098987/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh -n functional-789300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh -n functional-789300 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.21s)
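A minimal round-trip sketch modelled on the cp checks above: push a local file into the guest with "minikube cp", read it back over ssh, and compare the bytes. Paths mirror this run; the sketch assumes an installed minikube binary and any readable local file.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local := "testdata/cp-test.txt" // path used by the test; substitute your own file
	want, err := os.ReadFile(local)
	if err != nil {
		panic(err)
	}

	// Copy into the VM, as helpers_test.go does above.
	if err := exec.Command("minikube", "-p", "functional-789300", "cp",
		local, "/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}

	// Read it back via ssh and compare.
	got, err := exec.Command("minikube", "-p", "functional-789300", "ssh",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)) {
		fmt.Println("copy round-trip matches")
	} else {
		fmt.Println("contents differ")
	}
}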

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-789300 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-wzq6j" [96f6a067-83d9-45d1-93a2-3a15502cdb1e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-wzq6j" [96f6a067-83d9-45d1-93a2-3a15502cdb1e] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.002895113s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-789300 exec mysql-58ccfd96bb-wzq6j -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-789300 exec mysql-58ccfd96bb-wzq6j -- mysql -ppassword -e "show databases;": exit status 1 (238.769864ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0403 18:24:29.497380   21552 retry.go:31] will retry after 968.592565ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-789300 exec mysql-58ccfd96bb-wzq6j -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-789300 exec mysql-58ccfd96bb-wzq6j -- mysql -ppassword -e "show databases;": exit status 1 (324.439256ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0403 18:24:30.791323   21552 retry.go:31] will retry after 1.647823751s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-789300 exec mysql-58ccfd96bb-wzq6j -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.58s)
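The two ERROR 2002 exits above are expected noise: mysqld inside the pod needs a few seconds after the container reports Running, so the test retries the query with a growing delay until it succeeds. A minimal sketch of that retry pattern, reusing the pod name from this run (in practice, look it up with kubectl get pods first):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-789300", "exec",
		"mysql-58ccfd96bb-wzq6j", "--", "mysql", "-ppassword",
		"-e", "show databases;"}

	// Retry with a doubling delay, as the retry.go entries above do.
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("mysql never became reachable")
}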

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/21552/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo cat /etc/test/nested/copy/21552/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/21552.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo cat /etc/ssl/certs/21552.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/21552.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo cat /usr/share/ca-certificates/21552.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/215522.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo cat /etc/ssl/certs/215522.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/215522.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo cat /usr/share/ca-certificates/215522.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.22s)
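The three paths checked above should all expose the same synced certificate: the named .pem copies plus the hash-named .0 entry in /etc/ssl/certs (an OpenSSL subject-hash style name). A minimal sketch of that comparison over minikube ssh, with the paths and profile copied from this run:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/21552.pem",
		"/usr/share/ca-certificates/21552.pem",
		"/etc/ssl/certs/51391683.0",
	}

	var first []byte
	for i, p := range paths {
		out, err := exec.Command("minikube", "-p", "functional-789300",
			"ssh", "sudo cat "+p).Output()
		if err != nil {
			panic(err)
		}
		if i == 0 {
			first = out
			continue
		}
		if !bytes.Equal(first, out) {
			fmt.Println("mismatch at", p)
			return
		}
	}
	fmt.Println("all three copies match")
}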

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-789300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
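The --template flag above uses Go template syntax, iterating the first node's label map and printing only the keys. The same range-over-map construct can be evaluated with the standard library; a small self-contained sketch with stand-in label data:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in labels; kubectl feeds the node's metadata.labels map here.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-789300",
		"kubernetes.io/os":       "linux",
	}

	// Same construct as the kubectl --template argument above.
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}\n"))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}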

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789300 ssh "sudo systemctl is-active docker": exit status 1 (225.927629ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789300 ssh "sudo systemctl is-active containerd": exit status 1 (200.392686ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
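The non-zero exits above are the passing case: with crio as the active runtime, docker and containerd should be inactive, so "systemctl is-active" prints "inactive" and exits with status 3, which the ssh wrapper surfaces as exit status 1. A minimal sketch of the same probe over minikube ssh:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// A failing command here is expected when the unit is inactive;
		// the interesting part is the printed state.
		out, err := exec.Command("minikube", "-p", "functional-789300",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		fmt.Printf("%s: %s (ssh err: %v)\n", unit, state, err)
	}
}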

                                                
                                    
x
+
TestFunctional/parallel/License (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image ls --format short --alsologtostderr
E0403 18:24:35.049723   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-789300 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-789300
localhost/kicbase/echo-server:functional-789300
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-789300 image ls --format short --alsologtostderr:
I0403 18:24:35.042725   30590 out.go:345] Setting OutFile to fd 1 ...
I0403 18:24:35.043004   30590 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:24:35.043013   30590 out.go:358] Setting ErrFile to fd 2...
I0403 18:24:35.043017   30590 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:24:35.043169   30590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
I0403 18:24:35.043764   30590 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0403 18:24:35.043902   30590 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0403 18:24:35.044368   30590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0403 18:24:35.044427   30590 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:24:35.059660   30590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
I0403 18:24:35.060232   30590 main.go:141] libmachine: () Calling .GetVersion
I0403 18:24:35.060840   30590 main.go:141] libmachine: Using API Version  1
I0403 18:24:35.060867   30590 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:24:35.061300   30590 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:24:35.061452   30590 main.go:141] libmachine: (functional-789300) Calling .GetState
I0403 18:24:35.063208   30590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0403 18:24:35.063240   30590 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:24:35.078058   30590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
I0403 18:24:35.078541   30590 main.go:141] libmachine: () Calling .GetVersion
I0403 18:24:35.078999   30590 main.go:141] libmachine: Using API Version  1
I0403 18:24:35.079026   30590 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:24:35.079349   30590 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:24:35.079512   30590 main.go:141] libmachine: (functional-789300) Calling .DriverName
I0403 18:24:35.079704   30590 ssh_runner.go:195] Run: systemctl --version
I0403 18:24:35.079733   30590 main.go:141] libmachine: (functional-789300) Calling .GetSSHHostname
I0403 18:24:35.082260   30590 main.go:141] libmachine: (functional-789300) DBG | domain functional-789300 has defined MAC address 52:54:00:f0:75:73 in network mk-functional-789300
I0403 18:24:35.082680   30590 main.go:141] libmachine: (functional-789300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:75:73", ip: ""} in network mk-functional-789300: {Iface:virbr1 ExpiryTime:2025-04-03 19:21:03 +0000 UTC Type:0 Mac:52:54:00:f0:75:73 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-789300 Clientid:01:52:54:00:f0:75:73}
I0403 18:24:35.082718   30590 main.go:141] libmachine: (functional-789300) DBG | domain functional-789300 has defined IP address 192.168.39.170 and MAC address 52:54:00:f0:75:73 in network mk-functional-789300
I0403 18:24:35.082873   30590 main.go:141] libmachine: (functional-789300) Calling .GetSSHPort
I0403 18:24:35.083045   30590 main.go:141] libmachine: (functional-789300) Calling .GetSSHKeyPath
I0403 18:24:35.083185   30590 main.go:141] libmachine: (functional-789300) Calling .GetSSHUsername
I0403 18:24:35.083319   30590 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/functional-789300/id_rsa Username:docker}
I0403 18:24:35.165295   30590 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 18:24:35.206645   30590 main.go:141] libmachine: Making call to close driver server
I0403 18:24:35.206662   30590 main.go:141] libmachine: (functional-789300) Calling .Close
I0403 18:24:35.206957   30590 main.go:141] libmachine: (functional-789300) DBG | Closing plugin on server side
I0403 18:24:35.207034   30590 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:24:35.207064   30590 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:24:35.207082   30590 main.go:141] libmachine: Making call to close driver server
I0403 18:24:35.207094   30590 main.go:141] libmachine: (functional-789300) Calling .Close
I0403 18:24:35.207306   30590 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:24:35.207319   30590 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
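A minimal sketch of spot-checking the short listing above for a few expected references. The image references are copied from this run's output; the sketch assumes an installed minikube binary in place of out/minikube-linux-amd64.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-789300",
		"image", "ls", "--format", "short").Output()
	if err != nil {
		panic(err)
	}

	// Index the one-reference-per-line output.
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[strings.TrimSpace(line)] = true
	}

	for _, want := range []string{
		"registry.k8s.io/pause:3.10",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	} {
		fmt.Println(want, "present:", have[want])
	}
}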

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image ls --format table --alsologtostderr
E0403 18:24:39.535217   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-789300 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-789300  | 73be332fc1ea7 | 3.33kB |
| localhost/my-image                      | functional-789300  | 44a3f6041ec1c | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| docker.io/library/nginx                 | latest             | 53a18edff8091 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-789300  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-789300 image ls --format table --alsologtostderr:
I0403 18:24:39.553063   30764 out.go:345] Setting OutFile to fd 1 ...
I0403 18:24:39.553314   30764 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:24:39.553324   30764 out.go:358] Setting ErrFile to fd 2...
I0403 18:24:39.553328   30764 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:24:39.553560   30764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
I0403 18:24:39.554132   30764 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0403 18:24:39.554245   30764 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0403 18:24:39.554586   30764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0403 18:24:39.554663   30764 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:24:39.569283   30764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32911
I0403 18:24:39.569770   30764 main.go:141] libmachine: () Calling .GetVersion
I0403 18:24:39.570356   30764 main.go:141] libmachine: Using API Version  1
I0403 18:24:39.570385   30764 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:24:39.570763   30764 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:24:39.570985   30764 main.go:141] libmachine: (functional-789300) Calling .GetState
I0403 18:24:39.573073   30764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0403 18:24:39.573142   30764 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:24:39.588403   30764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
I0403 18:24:39.588886   30764 main.go:141] libmachine: () Calling .GetVersion
I0403 18:24:39.589286   30764 main.go:141] libmachine: Using API Version  1
I0403 18:24:39.589310   30764 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:24:39.589739   30764 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:24:39.589904   30764 main.go:141] libmachine: (functional-789300) Calling .DriverName
I0403 18:24:39.590134   30764 ssh_runner.go:195] Run: systemctl --version
I0403 18:24:39.590156   30764 main.go:141] libmachine: (functional-789300) Calling .GetSSHHostname
I0403 18:24:39.593125   30764 main.go:141] libmachine: (functional-789300) DBG | domain functional-789300 has defined MAC address 52:54:00:f0:75:73 in network mk-functional-789300
I0403 18:24:39.593554   30764 main.go:141] libmachine: (functional-789300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:75:73", ip: ""} in network mk-functional-789300: {Iface:virbr1 ExpiryTime:2025-04-03 19:21:03 +0000 UTC Type:0 Mac:52:54:00:f0:75:73 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-789300 Clientid:01:52:54:00:f0:75:73}
I0403 18:24:39.593581   30764 main.go:141] libmachine: (functional-789300) DBG | domain functional-789300 has defined IP address 192.168.39.170 and MAC address 52:54:00:f0:75:73 in network mk-functional-789300
I0403 18:24:39.593760   30764 main.go:141] libmachine: (functional-789300) Calling .GetSSHPort
I0403 18:24:39.593938   30764 main.go:141] libmachine: (functional-789300) Calling .GetSSHKeyPath
I0403 18:24:39.594099   30764 main.go:141] libmachine: (functional-789300) Calling .GetSSHUsername
I0403 18:24:39.594237   30764 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/functional-789300/id_rsa Username:docker}
I0403 18:24:39.679665   30764 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 18:24:39.726550   30764 main.go:141] libmachine: Making call to close driver server
I0403 18:24:39.726564   30764 main.go:141] libmachine: (functional-789300) Calling .Close
I0403 18:24:39.726835   30764 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:24:39.726897   30764 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:24:39.726910   30764 main.go:141] libmachine: Making call to close driver server
I0403 18:24:39.726917   30764 main.go:141] libmachine: (functional-789300) Calling .Close
I0403 18:24:39.726874   30764 main.go:141] libmachine: (functional-789300) DBG | Closing plugin on server side
I0403 18:24:39.727192   30764 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:24:39.727207   30764 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-789300 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-789300"],"size":"4943877"},{"id":"44a3f6041ec1c98324f879956c112d1b96e16169a178ac26f81e85fc51dbaf78","repoDigests":["localhost/my-image@sha256:b552477d7d97cb01540ab26cd0178eef2f853b63ddfc83b1606bd16e3f293c67"],"repoTags":["localhost/my-image:functional-789300"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"
686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0","repoDigests":["docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19","docker.io/library/nginx@sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4"],"repoTags":["docker.io/library/nginx:latest"],"size":"196159380"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d4
8bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":[
"registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"}
,{"id":"fdc9bb9738c6d0b4dd662f8b5207d26ed25570c0bcd4b0fe0eee30ee87ade541","repoDigests":["docker.io/library/6f1546ccad44c6f845a617852cda366260a418eee503d7ad21544d1e6400226b-tmp@sha256:2625860197b2b9b4025c13163cba8c96bb2e5336d15cc433ed8773b99fcf5eb2"],"repoTags":[],"size":"1466018"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"f1332858868e1c6a90
5123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73be332fc1ea7d4b1a5ae2e4110227940c0bbbf0c630631b4120604314829686","repoDigests":["localhost/minikube-local-cache-test@sha256:a0e99a4cbdb0982c6e20b63a48113676e9fa4e72a0171f99a3122b5a41f6489d"],"repoTags":["localhost/minikube-local-cache-test:functional-789300"],"size":"3330"},{"id":"51073
33e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-789300 image ls --format json --alsologtostderr:
I0403 18:24:39.344276   30740 out.go:345] Setting OutFile to fd 1 ...
I0403 18:24:39.344542   30740 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:24:39.344547   30740 out.go:358] Setting ErrFile to fd 2...
I0403 18:24:39.344552   30740 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:24:39.344827   30740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
I0403 18:24:39.345627   30740 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0403 18:24:39.345754   30740 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0403 18:24:39.346405   30740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0403 18:24:39.346499   30740 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:24:39.362591   30740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36743
I0403 18:24:39.363179   30740 main.go:141] libmachine: () Calling .GetVersion
I0403 18:24:39.363796   30740 main.go:141] libmachine: Using API Version  1
I0403 18:24:39.363819   30740 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:24:39.364132   30740 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:24:39.364324   30740 main.go:141] libmachine: (functional-789300) Calling .GetState
I0403 18:24:39.366430   30740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0403 18:24:39.366502   30740 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:24:39.381697   30740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
I0403 18:24:39.382106   30740 main.go:141] libmachine: () Calling .GetVersion
I0403 18:24:39.382501   30740 main.go:141] libmachine: Using API Version  1
I0403 18:24:39.382523   30740 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:24:39.382845   30740 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:24:39.383036   30740 main.go:141] libmachine: (functional-789300) Calling .DriverName
I0403 18:24:39.383199   30740 ssh_runner.go:195] Run: systemctl --version
I0403 18:24:39.383224   30740 main.go:141] libmachine: (functional-789300) Calling .GetSSHHostname
I0403 18:24:39.385552   30740 main.go:141] libmachine: (functional-789300) DBG | domain functional-789300 has defined MAC address 52:54:00:f0:75:73 in network mk-functional-789300
I0403 18:24:39.385910   30740 main.go:141] libmachine: (functional-789300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:75:73", ip: ""} in network mk-functional-789300: {Iface:virbr1 ExpiryTime:2025-04-03 19:21:03 +0000 UTC Type:0 Mac:52:54:00:f0:75:73 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-789300 Clientid:01:52:54:00:f0:75:73}
I0403 18:24:39.385939   30740 main.go:141] libmachine: (functional-789300) DBG | domain functional-789300 has defined IP address 192.168.39.170 and MAC address 52:54:00:f0:75:73 in network mk-functional-789300
I0403 18:24:39.386088   30740 main.go:141] libmachine: (functional-789300) Calling .GetSSHPort
I0403 18:24:39.386240   30740 main.go:141] libmachine: (functional-789300) Calling .GetSSHKeyPath
I0403 18:24:39.386359   30740 main.go:141] libmachine: (functional-789300) Calling .GetSSHUsername
I0403 18:24:39.386478   30740 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/functional-789300/id_rsa Username:docker}
I0403 18:24:39.465582   30740 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 18:24:39.503808   30740 main.go:141] libmachine: Making call to close driver server
I0403 18:24:39.503823   30740 main.go:141] libmachine: (functional-789300) Calling .Close
I0403 18:24:39.504112   30740 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:24:39.504131   30740 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:24:39.504140   30740 main.go:141] libmachine: Making call to close driver server
I0403 18:24:39.504147   30740 main.go:141] libmachine: (functional-789300) Calling .Close
I0403 18:24:39.504378   30740 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:24:39.504442   30740 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:24:39.504412   30740 main.go:141] libmachine: (functional-789300) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
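The JSON stdout above is an array of objects with id, repoDigests, repoTags, and a string-typed size in bytes. A minimal sketch of decoding that shape, using the struct fields visible in this run's output:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageInfo mirrors the fields visible in the JSON listing above;
// size is reported as a string of bytes.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-789300",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}

	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Println(img.RepoTags[0], img.Size)
		}
	}
}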

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-789300 image ls --format yaml --alsologtostderr:
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0
repoDigests:
- docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
- docker.io/library/nginx@sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4
repoTags:
- docker.io/library/nginx:latest
size: "196159380"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 73be332fc1ea7d4b1a5ae2e4110227940c0bbbf0c630631b4120604314829686
repoDigests:
- localhost/minikube-local-cache-test@sha256:a0e99a4cbdb0982c6e20b63a48113676e9fa4e72a0171f99a3122b5a41f6489d
repoTags:
- localhost/minikube-local-cache-test:functional-789300
size: "3330"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-789300
size: "4943877"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-789300 image ls --format yaml --alsologtostderr:
I0403 18:24:35.257322   30613 out.go:345] Setting OutFile to fd 1 ...
I0403 18:24:35.257544   30613 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:24:35.257554   30613 out.go:358] Setting ErrFile to fd 2...
I0403 18:24:35.257558   30613 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:24:35.257742   30613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
I0403 18:24:35.258238   30613 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0403 18:24:35.258363   30613 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0403 18:24:35.258685   30613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0403 18:24:35.258746   30613 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:24:35.278011   30613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43557
I0403 18:24:35.278500   30613 main.go:141] libmachine: () Calling .GetVersion
I0403 18:24:35.279008   30613 main.go:141] libmachine: Using API Version  1
I0403 18:24:35.279032   30613 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:24:35.279422   30613 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:24:35.279618   30613 main.go:141] libmachine: (functional-789300) Calling .GetState
I0403 18:24:35.281361   30613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0403 18:24:35.281405   30613 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:24:35.295913   30613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
I0403 18:24:35.296343   30613 main.go:141] libmachine: () Calling .GetVersion
I0403 18:24:35.296786   30613 main.go:141] libmachine: Using API Version  1
I0403 18:24:35.296808   30613 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:24:35.297109   30613 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:24:35.297273   30613 main.go:141] libmachine: (functional-789300) Calling .DriverName
I0403 18:24:35.297448   30613 ssh_runner.go:195] Run: systemctl --version
I0403 18:24:35.297475   30613 main.go:141] libmachine: (functional-789300) Calling .GetSSHHostname
I0403 18:24:35.300238   30613 main.go:141] libmachine: (functional-789300) DBG | domain functional-789300 has defined MAC address 52:54:00:f0:75:73 in network mk-functional-789300
I0403 18:24:35.300609   30613 main.go:141] libmachine: (functional-789300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:75:73", ip: ""} in network mk-functional-789300: {Iface:virbr1 ExpiryTime:2025-04-03 19:21:03 +0000 UTC Type:0 Mac:52:54:00:f0:75:73 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-789300 Clientid:01:52:54:00:f0:75:73}
I0403 18:24:35.300630   30613 main.go:141] libmachine: (functional-789300) DBG | domain functional-789300 has defined IP address 192.168.39.170 and MAC address 52:54:00:f0:75:73 in network mk-functional-789300
I0403 18:24:35.300749   30613 main.go:141] libmachine: (functional-789300) Calling .GetSSHPort
I0403 18:24:35.300887   30613 main.go:141] libmachine: (functional-789300) Calling .GetSSHKeyPath
I0403 18:24:35.301031   30613 main.go:141] libmachine: (functional-789300) Calling .GetSSHUsername
I0403 18:24:35.301173   30613 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/functional-789300/id_rsa Username:docker}
I0403 18:24:35.397469   30613 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 18:24:35.443523   30613 main.go:141] libmachine: Making call to close driver server
I0403 18:24:35.443539   30613 main.go:141] libmachine: (functional-789300) Calling .Close
I0403 18:24:35.443805   30613 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:24:35.443819   30613 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:24:35.443841   30613 main.go:141] libmachine: Making call to close driver server
I0403 18:24:35.443852   30613 main.go:141] libmachine: (functional-789300) Calling .Close
I0403 18:24:35.444055   30613 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:24:35.444068   30613 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789300 ssh pgrep buildkitd: exit status 1 (188.279371ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image build -t localhost/my-image:functional-789300 testdata/build --alsologtostderr
E0403 18:24:35.691871   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:24:36.973525   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 image build -t localhost/my-image:functional-789300 testdata/build --alsologtostderr: (3.460153535s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-789300 image build -t localhost/my-image:functional-789300 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fdc9bb9738c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-789300
--> 44a3f6041ec
Successfully tagged localhost/my-image:functional-789300
44a3f6041ec1c98324f879956c112d1b96e16169a178ac26f81e85fc51dbaf78
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-789300 image build -t localhost/my-image:functional-789300 testdata/build --alsologtostderr:
I0403 18:24:35.677580   30667 out.go:345] Setting OutFile to fd 1 ...
I0403 18:24:35.677853   30667 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:24:35.677864   30667 out.go:358] Setting ErrFile to fd 2...
I0403 18:24:35.677869   30667 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:24:35.678099   30667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
I0403 18:24:35.678569   30667 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0403 18:24:35.679176   30667 config.go:182] Loaded profile config "functional-789300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0403 18:24:35.679514   30667 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0403 18:24:35.679560   30667 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:24:35.694851   30667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39157
I0403 18:24:35.695291   30667 main.go:141] libmachine: () Calling .GetVersion
I0403 18:24:35.695778   30667 main.go:141] libmachine: Using API Version  1
I0403 18:24:35.695800   30667 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:24:35.696178   30667 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:24:35.696456   30667 main.go:141] libmachine: (functional-789300) Calling .GetState
I0403 18:24:35.699784   30667 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0403 18:24:35.699836   30667 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:24:35.714917   30667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34205
I0403 18:24:35.715352   30667 main.go:141] libmachine: () Calling .GetVersion
I0403 18:24:35.715784   30667 main.go:141] libmachine: Using API Version  1
I0403 18:24:35.715807   30667 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:24:35.716114   30667 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:24:35.716303   30667 main.go:141] libmachine: (functional-789300) Calling .DriverName
I0403 18:24:35.716481   30667 ssh_runner.go:195] Run: systemctl --version
I0403 18:24:35.716505   30667 main.go:141] libmachine: (functional-789300) Calling .GetSSHHostname
I0403 18:24:35.719204   30667 main.go:141] libmachine: (functional-789300) DBG | domain functional-789300 has defined MAC address 52:54:00:f0:75:73 in network mk-functional-789300
I0403 18:24:35.719614   30667 main.go:141] libmachine: (functional-789300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:75:73", ip: ""} in network mk-functional-789300: {Iface:virbr1 ExpiryTime:2025-04-03 19:21:03 +0000 UTC Type:0 Mac:52:54:00:f0:75:73 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-789300 Clientid:01:52:54:00:f0:75:73}
I0403 18:24:35.719637   30667 main.go:141] libmachine: (functional-789300) DBG | domain functional-789300 has defined IP address 192.168.39.170 and MAC address 52:54:00:f0:75:73 in network mk-functional-789300
I0403 18:24:35.719806   30667 main.go:141] libmachine: (functional-789300) Calling .GetSSHPort
I0403 18:24:35.719959   30667 main.go:141] libmachine: (functional-789300) Calling .GetSSHKeyPath
I0403 18:24:35.720102   30667 main.go:141] libmachine: (functional-789300) Calling .GetSSHUsername
I0403 18:24:35.720234   30667 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/functional-789300/id_rsa Username:docker}
I0403 18:24:35.805655   30667 build_images.go:161] Building image from path: /tmp/build.112195329.tar
I0403 18:24:35.805708   30667 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0403 18:24:35.821462   30667 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.112195329.tar
I0403 18:24:35.827425   30667 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.112195329.tar: stat -c "%s %y" /var/lib/minikube/build/build.112195329.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.112195329.tar': No such file or directory
I0403 18:24:35.827465   30667 ssh_runner.go:362] scp /tmp/build.112195329.tar --> /var/lib/minikube/build/build.112195329.tar (3072 bytes)
I0403 18:24:35.864858   30667 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.112195329
I0403 18:24:35.876785   30667 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.112195329 -xf /var/lib/minikube/build/build.112195329.tar
I0403 18:24:35.887196   30667 crio.go:315] Building image: /var/lib/minikube/build/build.112195329
I0403 18:24:35.887269   30667 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-789300 /var/lib/minikube/build/build.112195329 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0403 18:24:39.070076   30667 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-789300 /var/lib/minikube/build/build.112195329 --cgroup-manager=cgroupfs: (3.182778958s)
I0403 18:24:39.070138   30667 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.112195329
I0403 18:24:39.083296   30667 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.112195329.tar
I0403 18:24:39.092142   30667 build_images.go:217] Built localhost/my-image:functional-789300 from /tmp/build.112195329.tar
I0403 18:24:39.092170   30667 build_images.go:133] succeeded building to: functional-789300
I0403 18:24:39.092180   30667 build_images.go:134] failed building to: 
I0403 18:24:39.092226   30667 main.go:141] libmachine: Making call to close driver server
I0403 18:24:39.092236   30667 main.go:141] libmachine: (functional-789300) Calling .Close
I0403 18:24:39.092498   30667 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:24:39.092512   30667 main.go:141] libmachine: (functional-789300) DBG | Closing plugin on server side
I0403 18:24:39.092516   30667 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:24:39.092531   30667 main.go:141] libmachine: Making call to close driver server
I0403 18:24:39.092538   30667 main.go:141] libmachine: (functional-789300) Calling .Close
I0403 18:24:39.092735   30667 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:24:39.092750   30667 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:24:39.092761   30667 main.go:141] libmachine: (functional-789300) DBG | Closing plugin on server side
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.85s)
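
The build stderr above shows the image being built inside the VM with "sudo podman build" from a three-step context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) and then verified with "image ls". A minimal Go sketch of that same flow, assuming only the binary path and profile name from this run (out/minikube-linux-amd64, functional-789300) and nothing about the test's own helpers:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-789300"
	tag := "localhost/my-image:" + profile

	// Same invocation as the log: minikube -p <profile> image build -t <tag> testdata/build
	build := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "build", "-t", tag, "testdata/build", "--alsologtostderr")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}

	// Re-run "image ls" (as the test does afterwards) and check the new tag is present.
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	if !strings.Contains(string(ls), tag) {
		log.Fatalf("built image %s not listed", tag)
	}
	fmt.Println("image present:", tag)
}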

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.838751332s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-789300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.86s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image load --daemon kicbase/echo-server:functional-789300 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 image load --daemon kicbase/echo-server:functional-789300 --alsologtostderr: (1.1591509s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "258.063852ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "45.636779ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "282.081068ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "45.954536ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
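
Both timings above come from the JSON output mode of "profile list". A minimal sketch of consuming that output programmatically, assuming only that the top level is a JSON object (the exact schema is not assumed here):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	var parsed map[string]interface{}
	if err := json.Unmarshal(out, &parsed); err != nil {
		log.Fatalf("output is not a JSON object: %v", err)
	}
	for key, val := range parsed {
		fmt.Printf("top-level key %q (%T)\n", key, val)
	}
}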

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image load --daemon kicbase/echo-server:functional-789300 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-789300
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image load --daemon kicbase/echo-server:functional-789300 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 image load --daemon kicbase/echo-server:functional-789300 --alsologtostderr: (3.430689389s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image save kicbase/echo-server:functional-789300 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:397: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 image save kicbase/echo-server:functional-789300 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.818567779s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image rm kicbase/echo-server:functional-789300 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.028822718s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-789300
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 image save --daemon kicbase/echo-server:functional-789300 --alsologtostderr
functional_test.go:441: (dbg) Done: out/minikube-linux-amd64 -p functional-789300 image save --daemon kicbase/echo-server:functional-789300 --alsologtostderr: (1.417640559s)
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-789300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.45s)
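
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full save/load round trip. A condensed sketch of that sequence, assuming the binary path, profile and image name from this run and using an arbitrary temp tar path instead of the workspace path in the log:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

// mk runs the minikube binary from this workspace against the functional-789300 profile.
func mk(args ...string) {
	full := append([]string{"-p", "functional-789300"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	img := "kicbase/echo-server:functional-789300"
	tar := filepath.Join(os.TempDir(), "echo-server-save.tar") // placeholder path, not the one in the log

	mk("image", "save", img, tar)        // export the image to a tarball on the host
	mk("image", "rm", img)               // remove it from the cluster's runtime
	mk("image", "load", tar)             // re-import it from the tarball
	mk("image", "save", "--daemon", img) // export it into the local docker daemon
	log.Println("round trip complete; verify with: docker image inspect localhost/" + img)
}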

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-789300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-789300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-fhp46" [390d980c-ceb9-434b-b66f-ebd81d8d2005] Pending
helpers_test.go:344: "hello-node-fcfd88b6f-fhp46" [390d980c-ceb9-434b-b66f-ebd81d8d2005] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004855763s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdany-port1658650628/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1743704672789063485" to /tmp/TestFunctionalparallelMountCmdany-port1658650628/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1743704672789063485" to /tmp/TestFunctionalparallelMountCmdany-port1658650628/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1743704672789063485" to /tmp/TestFunctionalparallelMountCmdany-port1658650628/001/test-1743704672789063485
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (217.487519ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0403 18:24:33.006846   21552 retry.go:31] will retry after 252.16081ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  3 18:24 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  3 18:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  3 18:24 test-1743704672789063485
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh cat /mount-9p/test-1743704672789063485
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-789300 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9b1aeb36-de9d-422f-a458-405c39c610d3] Pending
helpers_test.go:344: "busybox-mount" [9b1aeb36-de9d-422f-a458-405c39c610d3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9b1aeb36-de9d-422f-a458-405c39c610d3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9b1aeb36-de9d-422f-a458-405c39c610d3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004160343s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-789300 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdany-port1658650628/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.29s)
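
The any-port run above shows the usual pattern for 9p mounts: the first "findmnt -T /mount-9p | grep 9p" fails (exit status 1) because the mount has not appeared yet, retry.go waits ~250ms, and the second attempt succeeds. A generic sketch of that wait loop, run directly on the guest rather than over ssh and with illustrative intervals rather than the test helper's own:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// waitForMount polls findmnt until path is a mount point or the deadline passes.
func waitForMount(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		// findmnt -T exits non-zero while the target is not yet mounted.
		if err := exec.Command("findmnt", "-T", path).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s not mounted after %s", path, timeout)
		}
		time.Sleep(delay)
		if delay < 2*time.Second {
			delay *= 2 // back off; the real helper picks its own intervals
		}
	}
}

func main() {
	if err := waitForMount("/mount-9p", 30*time.Second); err != nil {
		log.Fatal(err)
	}
	fmt.Println("/mount-9p is mounted")
}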

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 service list -o json
functional_test.go:1511: Took "416.184258ms" to run "out/minikube-linux-amd64 -p functional-789300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.170:31310
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.170:31310
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
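
Both the HTTPS and URL variants resolve the hello-node NodePort to http://192.168.39.170:31310 for this run. A minimal sketch that probes such an endpoint once the URL is known (the default address below is specific to this run; pass a different URL as the first argument otherwise):

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	// Default is the endpoint printed above for this run.
	url := "http://192.168.39.170:31310"
	if len(os.Args) > 1 {
		url = os.Args[1]
	}
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatalf("endpoint not reachable: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}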

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdspecific-port1236649823/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (262.504624ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0403 18:24:42.341743   21552 retry.go:31] will retry after 342.859901ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdspecific-port1236649823/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789300 ssh "sudo umount -f /mount-9p": exit status 1 (247.006189ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-789300 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdspecific-port1236649823/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup93899920/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup93899920/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup93899920/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T" /mount1: exit status 1 (265.427321ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0403 18:24:44.088823   21552 retry.go:31] will retry after 524.218159ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T" /mount1
E0403 18:24:44.657357   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-789300 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-789300 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup93899920/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup93899920/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-789300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup93899920/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2025/04/03 18:24:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-789300
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-789300
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-789300
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (195.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-531280 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0403 18:25:15.381270   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:25:56.343482   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:27:18.266879   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-531280 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.598070167s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.25s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-531280 -- rollout status deployment/busybox: (4.716868428s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-5blzg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-7bkfz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-8hllb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-5blzg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-7bkfz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-8hllb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-5blzg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-7bkfz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-8hllb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.74s)
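
The deploy check above fans out three DNS lookups (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) across each busybox replica. A sketch of the same fan-out, assuming the pod names from this run; in practice they would come from the preceding "get pods" call:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Pod names are from this run; normally they come from "get pods -o jsonpath=...".
	pods := []string{"busybox-58667487b6-5blzg", "busybox-58667487b6-7bkfz", "busybox-58667487b6-8hllb"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			cmd := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "ha-531280", "--",
				"exec", pod, "--", "nslookup", name)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("nslookup %s in %s failed: %v\n%s", name, pod, err, out)
			}
		}
	}
	log.Println("DNS resolution verified in every busybox replica")
}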

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-5blzg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-5blzg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-7bkfz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-7bkfz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-8hllb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-531280 -- exec busybox-58667487b6-8hllb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.10s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-531280 -v=7 --alsologtostderr
E0403 18:29:09.256356   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:09.262735   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:09.274067   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:09.295425   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:09.336805   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:09.418224   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:09.579725   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:09.901360   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:10.543132   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:11.824579   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:14.386533   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-531280 -v=7 --alsologtostderr: (57.891243879s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.72s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-531280 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 status --output json -v=7 --alsologtostderr
E0403 18:29:19.508473   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp testdata/cp-test.txt ha-531280:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037999209/001/cp-test_ha-531280.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280:/home/docker/cp-test.txt ha-531280-m02:/home/docker/cp-test_ha-531280_ha-531280-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m02 "sudo cat /home/docker/cp-test_ha-531280_ha-531280-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280:/home/docker/cp-test.txt ha-531280-m03:/home/docker/cp-test_ha-531280_ha-531280-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m03 "sudo cat /home/docker/cp-test_ha-531280_ha-531280-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280:/home/docker/cp-test.txt ha-531280-m04:/home/docker/cp-test_ha-531280_ha-531280-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m04 "sudo cat /home/docker/cp-test_ha-531280_ha-531280-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp testdata/cp-test.txt ha-531280-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037999209/001/cp-test_ha-531280-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m02:/home/docker/cp-test.txt ha-531280:/home/docker/cp-test_ha-531280-m02_ha-531280.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280 "sudo cat /home/docker/cp-test_ha-531280-m02_ha-531280.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m02:/home/docker/cp-test.txt ha-531280-m03:/home/docker/cp-test_ha-531280-m02_ha-531280-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m03 "sudo cat /home/docker/cp-test_ha-531280-m02_ha-531280-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m02:/home/docker/cp-test.txt ha-531280-m04:/home/docker/cp-test_ha-531280-m02_ha-531280-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m04 "sudo cat /home/docker/cp-test_ha-531280-m02_ha-531280-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp testdata/cp-test.txt ha-531280-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037999209/001/cp-test_ha-531280-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m03:/home/docker/cp-test.txt ha-531280:/home/docker/cp-test_ha-531280-m03_ha-531280.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280 "sudo cat /home/docker/cp-test_ha-531280-m03_ha-531280.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m03:/home/docker/cp-test.txt ha-531280-m02:/home/docker/cp-test_ha-531280-m03_ha-531280-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m02 "sudo cat /home/docker/cp-test_ha-531280-m03_ha-531280-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m03:/home/docker/cp-test.txt ha-531280-m04:/home/docker/cp-test_ha-531280-m03_ha-531280-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m04 "sudo cat /home/docker/cp-test_ha-531280-m03_ha-531280-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp testdata/cp-test.txt ha-531280-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3037999209/001/cp-test_ha-531280-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m04:/home/docker/cp-test.txt ha-531280:/home/docker/cp-test_ha-531280-m04_ha-531280.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280 "sudo cat /home/docker/cp-test_ha-531280-m04_ha-531280.txt"
E0403 18:29:29.750104   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m04:/home/docker/cp-test.txt ha-531280-m02:/home/docker/cp-test_ha-531280-m04_ha-531280-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m02 "sudo cat /home/docker/cp-test_ha-531280-m04_ha-531280-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 cp ha-531280-m04:/home/docker/cp-test.txt ha-531280-m03:/home/docker/cp-test_ha-531280-m04_ha-531280-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 ssh -n ha-531280-m03 "sudo cat /home/docker/cp-test_ha-531280-m04_ha-531280-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.55s)
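
CopyFile verifies every node-to-node pair: cp-test.txt is copied into each node, then from that node to every other node, and read back over ssh using the cp-test_<src>_<dst>.txt naming scheme. A compact sketch of that all-pairs loop, assuming the node names and binary path from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	profile := "ha-531280"
	nodes := []string{"ha-531280", "ha-531280-m02", "ha-531280-m03", "ha-531280-m04"}
	for _, src := range nodes {
		// Seed the source node with the test file.
		run("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// Copy node-to-node, then read the file back on the destination.
			target := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			run("-p", profile, "cp", src+":/home/docker/cp-test.txt", target)
			run("-p", profile, "ssh", "-n", dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
		}
	}
	fmt.Println("all node-to-node copies verified")
}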

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 node stop m02 -v=7 --alsologtostderr
E0403 18:29:34.401964   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:29:50.232033   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:02.108885   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:31.193764   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-531280 node stop m02 -v=7 --alsologtostderr: (1m30.967566466s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr: exit status 7 (615.449569ms)

                                                
                                                
-- stdout --
	ha-531280
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-531280-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-531280-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-531280-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0403 18:31:02.295893   35931 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:31:02.295982   35931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:31:02.295990   35931 out.go:358] Setting ErrFile to fd 2...
	I0403 18:31:02.295994   35931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:31:02.296190   35931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 18:31:02.296340   35931 out.go:352] Setting JSON to false
	I0403 18:31:02.296367   35931 mustload.go:65] Loading cluster: ha-531280
	I0403 18:31:02.296414   35931 notify.go:220] Checking for updates...
	I0403 18:31:02.296907   35931 config.go:182] Loaded profile config "ha-531280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 18:31:02.296935   35931 status.go:174] checking status of ha-531280 ...
	I0403 18:31:02.297391   35931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:31:02.297431   35931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:31:02.316076   35931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34217
	I0403 18:31:02.316527   35931 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:31:02.317036   35931 main.go:141] libmachine: Using API Version  1
	I0403 18:31:02.317062   35931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:31:02.317537   35931 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:31:02.317687   35931 main.go:141] libmachine: (ha-531280) Calling .GetState
	I0403 18:31:02.319287   35931 status.go:371] ha-531280 host status = "Running" (err=<nil>)
	I0403 18:31:02.319306   35931 host.go:66] Checking if "ha-531280" exists ...
	I0403 18:31:02.319594   35931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:31:02.319634   35931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:31:02.334329   35931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I0403 18:31:02.334731   35931 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:31:02.335181   35931 main.go:141] libmachine: Using API Version  1
	I0403 18:31:02.335217   35931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:31:02.335547   35931 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:31:02.335721   35931 main.go:141] libmachine: (ha-531280) Calling .GetIP
	I0403 18:31:02.338511   35931 main.go:141] libmachine: (ha-531280) DBG | domain ha-531280 has defined MAC address 52:54:00:7d:f4:1f in network mk-ha-531280
	I0403 18:31:02.338954   35931 main.go:141] libmachine: (ha-531280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f4:1f", ip: ""} in network mk-ha-531280: {Iface:virbr1 ExpiryTime:2025-04-03 19:25:10 +0000 UTC Type:0 Mac:52:54:00:7d:f4:1f Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-531280 Clientid:01:52:54:00:7d:f4:1f}
	I0403 18:31:02.338979   35931 main.go:141] libmachine: (ha-531280) DBG | domain ha-531280 has defined IP address 192.168.39.228 and MAC address 52:54:00:7d:f4:1f in network mk-ha-531280
	I0403 18:31:02.339157   35931 host.go:66] Checking if "ha-531280" exists ...
	I0403 18:31:02.339481   35931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:31:02.339520   35931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:31:02.353706   35931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0403 18:31:02.354138   35931 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:31:02.354561   35931 main.go:141] libmachine: Using API Version  1
	I0403 18:31:02.354579   35931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:31:02.354915   35931 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:31:02.355085   35931 main.go:141] libmachine: (ha-531280) Calling .DriverName
	I0403 18:31:02.355269   35931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0403 18:31:02.355305   35931 main.go:141] libmachine: (ha-531280) Calling .GetSSHHostname
	I0403 18:31:02.357793   35931 main.go:141] libmachine: (ha-531280) DBG | domain ha-531280 has defined MAC address 52:54:00:7d:f4:1f in network mk-ha-531280
	I0403 18:31:02.358214   35931 main.go:141] libmachine: (ha-531280) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f4:1f", ip: ""} in network mk-ha-531280: {Iface:virbr1 ExpiryTime:2025-04-03 19:25:10 +0000 UTC Type:0 Mac:52:54:00:7d:f4:1f Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-531280 Clientid:01:52:54:00:7d:f4:1f}
	I0403 18:31:02.358247   35931 main.go:141] libmachine: (ha-531280) DBG | domain ha-531280 has defined IP address 192.168.39.228 and MAC address 52:54:00:7d:f4:1f in network mk-ha-531280
	I0403 18:31:02.358352   35931 main.go:141] libmachine: (ha-531280) Calling .GetSSHPort
	I0403 18:31:02.358509   35931 main.go:141] libmachine: (ha-531280) Calling .GetSSHKeyPath
	I0403 18:31:02.358642   35931 main.go:141] libmachine: (ha-531280) Calling .GetSSHUsername
	I0403 18:31:02.358776   35931 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/ha-531280/id_rsa Username:docker}
	I0403 18:31:02.443216   35931 ssh_runner.go:195] Run: systemctl --version
	I0403 18:31:02.450138   35931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:31:02.464905   35931 kubeconfig.go:125] found "ha-531280" server: "https://192.168.39.254:8443"
	I0403 18:31:02.464934   35931 api_server.go:166] Checking apiserver status ...
	I0403 18:31:02.464961   35931 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 18:31:02.479141   35931 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup
	W0403 18:31:02.488745   35931 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0403 18:31:02.488794   35931 ssh_runner.go:195] Run: ls
	I0403 18:31:02.493045   35931 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0403 18:31:02.497189   35931 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0403 18:31:02.497209   35931 status.go:463] ha-531280 apiserver status = Running (err=<nil>)
	I0403 18:31:02.497221   35931 status.go:176] ha-531280 status: &{Name:ha-531280 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:31:02.497241   35931 status.go:174] checking status of ha-531280-m02 ...
	I0403 18:31:02.497542   35931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:31:02.497585   35931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:31:02.513630   35931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
	I0403 18:31:02.514066   35931 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:31:02.514511   35931 main.go:141] libmachine: Using API Version  1
	I0403 18:31:02.514533   35931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:31:02.514791   35931 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:31:02.514972   35931 main.go:141] libmachine: (ha-531280-m02) Calling .GetState
	I0403 18:31:02.516305   35931 status.go:371] ha-531280-m02 host status = "Stopped" (err=<nil>)
	I0403 18:31:02.516321   35931 status.go:384] host is not running, skipping remaining checks
	I0403 18:31:02.516327   35931 status.go:176] ha-531280-m02 status: &{Name:ha-531280-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:31:02.516344   35931 status.go:174] checking status of ha-531280-m03 ...
	I0403 18:31:02.516600   35931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:31:02.516632   35931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:31:02.530508   35931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I0403 18:31:02.530919   35931 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:31:02.531323   35931 main.go:141] libmachine: Using API Version  1
	I0403 18:31:02.531343   35931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:31:02.531611   35931 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:31:02.531757   35931 main.go:141] libmachine: (ha-531280-m03) Calling .GetState
	I0403 18:31:02.533040   35931 status.go:371] ha-531280-m03 host status = "Running" (err=<nil>)
	I0403 18:31:02.533057   35931 host.go:66] Checking if "ha-531280-m03" exists ...
	I0403 18:31:02.533352   35931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:31:02.533381   35931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:31:02.547004   35931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0403 18:31:02.547374   35931 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:31:02.547763   35931 main.go:141] libmachine: Using API Version  1
	I0403 18:31:02.547782   35931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:31:02.548076   35931 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:31:02.548244   35931 main.go:141] libmachine: (ha-531280-m03) Calling .GetIP
	I0403 18:31:02.550884   35931 main.go:141] libmachine: (ha-531280-m03) DBG | domain ha-531280-m03 has defined MAC address 52:54:00:d2:38:1a in network mk-ha-531280
	I0403 18:31:02.551295   35931 main.go:141] libmachine: (ha-531280-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:38:1a", ip: ""} in network mk-ha-531280: {Iface:virbr1 ExpiryTime:2025-04-03 19:27:10 +0000 UTC Type:0 Mac:52:54:00:d2:38:1a Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-531280-m03 Clientid:01:52:54:00:d2:38:1a}
	I0403 18:31:02.551332   35931 main.go:141] libmachine: (ha-531280-m03) DBG | domain ha-531280-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:d2:38:1a in network mk-ha-531280
	I0403 18:31:02.551457   35931 host.go:66] Checking if "ha-531280-m03" exists ...
	I0403 18:31:02.551852   35931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:31:02.551918   35931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:31:02.565644   35931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43285
	I0403 18:31:02.565994   35931 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:31:02.566381   35931 main.go:141] libmachine: Using API Version  1
	I0403 18:31:02.566409   35931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:31:02.566713   35931 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:31:02.566879   35931 main.go:141] libmachine: (ha-531280-m03) Calling .DriverName
	I0403 18:31:02.567060   35931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0403 18:31:02.567081   35931 main.go:141] libmachine: (ha-531280-m03) Calling .GetSSHHostname
	I0403 18:31:02.569722   35931 main.go:141] libmachine: (ha-531280-m03) DBG | domain ha-531280-m03 has defined MAC address 52:54:00:d2:38:1a in network mk-ha-531280
	I0403 18:31:02.570134   35931 main.go:141] libmachine: (ha-531280-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:38:1a", ip: ""} in network mk-ha-531280: {Iface:virbr1 ExpiryTime:2025-04-03 19:27:10 +0000 UTC Type:0 Mac:52:54:00:d2:38:1a Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-531280-m03 Clientid:01:52:54:00:d2:38:1a}
	I0403 18:31:02.570159   35931 main.go:141] libmachine: (ha-531280-m03) DBG | domain ha-531280-m03 has defined IP address 192.168.39.176 and MAC address 52:54:00:d2:38:1a in network mk-ha-531280
	I0403 18:31:02.570319   35931 main.go:141] libmachine: (ha-531280-m03) Calling .GetSSHPort
	I0403 18:31:02.570488   35931 main.go:141] libmachine: (ha-531280-m03) Calling .GetSSHKeyPath
	I0403 18:31:02.570631   35931 main.go:141] libmachine: (ha-531280-m03) Calling .GetSSHUsername
	I0403 18:31:02.570772   35931 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/ha-531280-m03/id_rsa Username:docker}
	I0403 18:31:02.662907   35931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:31:02.678107   35931 kubeconfig.go:125] found "ha-531280" server: "https://192.168.39.254:8443"
	I0403 18:31:02.678130   35931 api_server.go:166] Checking apiserver status ...
	I0403 18:31:02.678157   35931 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 18:31:02.691445   35931 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1471/cgroup
	W0403 18:31:02.701834   35931 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1471/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0403 18:31:02.701898   35931 ssh_runner.go:195] Run: ls
	I0403 18:31:02.706095   35931 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0403 18:31:02.710336   35931 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0403 18:31:02.710359   35931 status.go:463] ha-531280-m03 apiserver status = Running (err=<nil>)
	I0403 18:31:02.710369   35931 status.go:176] ha-531280-m03 status: &{Name:ha-531280-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:31:02.710386   35931 status.go:174] checking status of ha-531280-m04 ...
	I0403 18:31:02.710650   35931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:31:02.710680   35931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:31:02.725421   35931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34357
	I0403 18:31:02.725800   35931 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:31:02.726205   35931 main.go:141] libmachine: Using API Version  1
	I0403 18:31:02.726228   35931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:31:02.726552   35931 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:31:02.726737   35931 main.go:141] libmachine: (ha-531280-m04) Calling .GetState
	I0403 18:31:02.728207   35931 status.go:371] ha-531280-m04 host status = "Running" (err=<nil>)
	I0403 18:31:02.728221   35931 host.go:66] Checking if "ha-531280-m04" exists ...
	I0403 18:31:02.728484   35931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:31:02.728516   35931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:31:02.743087   35931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I0403 18:31:02.743480   35931 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:31:02.743907   35931 main.go:141] libmachine: Using API Version  1
	I0403 18:31:02.743926   35931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:31:02.744240   35931 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:31:02.744408   35931 main.go:141] libmachine: (ha-531280-m04) Calling .GetIP
	I0403 18:31:02.747193   35931 main.go:141] libmachine: (ha-531280-m04) DBG | domain ha-531280-m04 has defined MAC address 52:54:00:e2:a9:77 in network mk-ha-531280
	I0403 18:31:02.747647   35931 main.go:141] libmachine: (ha-531280-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a9:77", ip: ""} in network mk-ha-531280: {Iface:virbr1 ExpiryTime:2025-04-03 19:28:34 +0000 UTC Type:0 Mac:52:54:00:e2:a9:77 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:ha-531280-m04 Clientid:01:52:54:00:e2:a9:77}
	I0403 18:31:02.747668   35931 main.go:141] libmachine: (ha-531280-m04) DBG | domain ha-531280-m04 has defined IP address 192.168.39.124 and MAC address 52:54:00:e2:a9:77 in network mk-ha-531280
	I0403 18:31:02.747823   35931 host.go:66] Checking if "ha-531280-m04" exists ...
	I0403 18:31:02.748089   35931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:31:02.748125   35931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:31:02.762215   35931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38053
	I0403 18:31:02.762553   35931 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:31:02.762982   35931 main.go:141] libmachine: Using API Version  1
	I0403 18:31:02.763005   35931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:31:02.763300   35931 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:31:02.763455   35931 main.go:141] libmachine: (ha-531280-m04) Calling .DriverName
	I0403 18:31:02.763604   35931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0403 18:31:02.763622   35931 main.go:141] libmachine: (ha-531280-m04) Calling .GetSSHHostname
	I0403 18:31:02.765956   35931 main.go:141] libmachine: (ha-531280-m04) DBG | domain ha-531280-m04 has defined MAC address 52:54:00:e2:a9:77 in network mk-ha-531280
	I0403 18:31:02.766332   35931 main.go:141] libmachine: (ha-531280-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:a9:77", ip: ""} in network mk-ha-531280: {Iface:virbr1 ExpiryTime:2025-04-03 19:28:34 +0000 UTC Type:0 Mac:52:54:00:e2:a9:77 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:ha-531280-m04 Clientid:01:52:54:00:e2:a9:77}
	I0403 18:31:02.766353   35931 main.go:141] libmachine: (ha-531280-m04) DBG | domain ha-531280-m04 has defined IP address 192.168.39.124 and MAC address 52:54:00:e2:a9:77 in network mk-ha-531280
	I0403 18:31:02.766473   35931 main.go:141] libmachine: (ha-531280-m04) Calling .GetSSHPort
	I0403 18:31:02.766633   35931 main.go:141] libmachine: (ha-531280-m04) Calling .GetSSHKeyPath
	I0403 18:31:02.766747   35931 main.go:141] libmachine: (ha-531280-m04) Calling .GetSSHUsername
	I0403 18:31:02.766891   35931 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/ha-531280-m04/id_rsa Username:docker}
	I0403 18:31:02.854491   35931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:31:02.869373   35931 status.go:176] ha-531280-m04 status: &{Name:ha-531280-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.58s)
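For reference, the core of this test can be replayed by hand; a minimal sketch, assuming a locally built minikube at out/minikube-linux-amd64 and the existing HA profile ha-531280 from this run:
    out/minikube-linux-amd64 -p ha-531280 node stop m02 -v=7 --alsologtostderr
    # status is expected to exit non-zero (7 in this run) while the m02 control plane is stopped
    out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr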
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.62s)
TestMultiControlPlane/serial/RestartSecondaryNode (52.75s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 node start m02 -v=7 --alsologtostderr
E0403 18:31:53.115954   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-531280 node start m02 -v=7 --alsologtostderr: (51.8742682s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (52.75s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (427.9s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-531280 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-531280 -v=7 --alsologtostderr
E0403 18:34:09.256226   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:34:34.401987   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:34:36.958276   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-531280 -v=7 --alsologtostderr: (4m33.941508563s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-531280 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-531280 --wait=true -v=7 --alsologtostderr: (2m33.865545865s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-531280
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (427.90s)
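A minimal sketch of the stop/restart cycle driven above, under the same assumptions (local binary, ha-531280 profile); the node list printed before the stop should match the one printed after the restart:
    out/minikube-linux-amd64 node list -p ha-531280 -v=7 --alsologtostderr
    out/minikube-linux-amd64 stop -p ha-531280 -v=7 --alsologtostderr
    out/minikube-linux-amd64 start -p ha-531280 --wait=true -v=7 --alsologtostderr
    out/minikube-linux-amd64 node list -p ha-531280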
TestMultiControlPlane/serial/DeleteSecondaryNode (17.66s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 node delete m03 -v=7 --alsologtostderr
E0403 18:39:09.256081   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-531280 node delete m03 -v=7 --alsologtostderr: (16.961143734s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.66s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)
TestMultiControlPlane/serial/StopCluster (272.64s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 stop -v=7 --alsologtostderr
E0403 18:39:34.401199   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:40:57.470970   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-531280 stop -v=7 --alsologtostderr: (4m32.541327113s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr: exit status 7 (96.502213ms)

-- stdout --
	ha-531280
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-531280-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-531280-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0403 18:43:55.827568   40196 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:43:55.828029   40196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:43:55.828079   40196 out.go:358] Setting ErrFile to fd 2...
	I0403 18:43:55.828096   40196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:43:55.828568   40196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 18:43:55.828886   40196 out.go:352] Setting JSON to false
	I0403 18:43:55.828967   40196 notify.go:220] Checking for updates...
	I0403 18:43:55.828969   40196 mustload.go:65] Loading cluster: ha-531280
	I0403 18:43:55.829559   40196 config.go:182] Loaded profile config "ha-531280": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 18:43:55.829579   40196 status.go:174] checking status of ha-531280 ...
	I0403 18:43:55.829944   40196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:43:55.829985   40196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:43:55.844878   40196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44063
	I0403 18:43:55.845346   40196 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:43:55.845809   40196 main.go:141] libmachine: Using API Version  1
	I0403 18:43:55.845829   40196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:43:55.846136   40196 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:43:55.846354   40196 main.go:141] libmachine: (ha-531280) Calling .GetState
	I0403 18:43:55.848033   40196 status.go:371] ha-531280 host status = "Stopped" (err=<nil>)
	I0403 18:43:55.848052   40196 status.go:384] host is not running, skipping remaining checks
	I0403 18:43:55.848057   40196 status.go:176] ha-531280 status: &{Name:ha-531280 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:43:55.848093   40196 status.go:174] checking status of ha-531280-m02 ...
	I0403 18:43:55.848377   40196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:43:55.848410   40196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:43:55.862985   40196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0403 18:43:55.863337   40196 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:43:55.863714   40196 main.go:141] libmachine: Using API Version  1
	I0403 18:43:55.863741   40196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:43:55.863999   40196 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:43:55.864169   40196 main.go:141] libmachine: (ha-531280-m02) Calling .GetState
	I0403 18:43:55.865577   40196 status.go:371] ha-531280-m02 host status = "Stopped" (err=<nil>)
	I0403 18:43:55.865590   40196 status.go:384] host is not running, skipping remaining checks
	I0403 18:43:55.865597   40196 status.go:176] ha-531280-m02 status: &{Name:ha-531280-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:43:55.865616   40196 status.go:174] checking status of ha-531280-m04 ...
	I0403 18:43:55.865902   40196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:43:55.865941   40196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:43:55.880145   40196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32947
	I0403 18:43:55.880518   40196 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:43:55.880935   40196 main.go:141] libmachine: Using API Version  1
	I0403 18:43:55.880963   40196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:43:55.881269   40196 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:43:55.881424   40196 main.go:141] libmachine: (ha-531280-m04) Calling .GetState
	I0403 18:43:55.882701   40196 status.go:371] ha-531280-m04 host status = "Stopped" (err=<nil>)
	I0403 18:43:55.882714   40196 status.go:384] host is not running, skipping remaining checks
	I0403 18:43:55.882720   40196 status.go:176] ha-531280-m04 status: &{Name:ha-531280-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.64s)
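Sketch of the equivalent manual steps against the same profile; as seen above, status exits with code 7 once every node reports Stopped:
    out/minikube-linux-amd64 -p ha-531280 stop -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr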
TestMultiControlPlane/serial/RestartCluster (146.98s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-531280 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0403 18:44:09.255981   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:44:34.401305   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:45:32.319953   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-531280 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m26.211507712s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (146.98s)
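The restart can be reproduced with the same flags (kvm2 driver, crio runtime); a sketch assuming the stopped ha-531280 profile left behind by the previous step:
    out/minikube-linux-amd64 start -p ha-531280 --wait=true -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr
    kubectl get nodes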
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)
TestMultiControlPlane/serial/AddSecondaryNode (76.87s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-531280 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-531280 --control-plane -v=7 --alsologtostderr: (1m16.039589664s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.87s)
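Adding another control-plane member by hand follows the same shape; a sketch against the running ha-531280 profile:
    out/minikube-linux-amd64 node add -p ha-531280 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-531280 status -v=7 --alsologtostderr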
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
TestJSONOutput/start/Command (89.55s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-266047 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0403 18:49:09.260028   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-266047 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m29.554097735s)
--- PASS: TestJSONOutput/start/Command (89.55s)
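The JSON output mode exercised here can be tried directly; a sketch using the flags from this run (the profile name is arbitrary):
    out/minikube-linux-amd64 start -p json-output-266047 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio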
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.64s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-266047 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.57s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-266047 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (7.35s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-266047 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-266047 --output=json --user=testUser: (7.344853104s)
--- PASS: TestJSONOutput/stop/Command (7.35s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-726790 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-726790 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.572381ms)

-- stdout --
	{"specversion":"1.0","id":"fe902ebc-5bfa-44ae-9178-75668540dcdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-726790] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc814edc-ab27-4bdf-aede-8c8df4d72fb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20591"}}
	{"specversion":"1.0","id":"41c14aa0-6fa0-4f43-8ccb-df2862e6c1c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6698bf7d-ae25-4641-a02a-dccacf70f19e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig"}}
	{"specversion":"1.0","id":"b2a9210a-7ffb-4eda-9b48-65c4c4ec1fce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube"}}
	{"specversion":"1.0","id":"bcdb1c25-4a47-4624-9849-24f33c059985","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8b302a7b-5d69-4042-80e8-c7dd3d7344c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a5ab3d7-f93e-485d-99b6-7133f7ca1bf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-726790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-726790
--- PASS: TestErrorJSONOutput (0.18s)
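The error path can be reproduced by requesting an unsupported driver; a sketch with the same flags, where the DRV_UNSUPPORTED_OS event shown above is printed as JSON and the command exits with status 56:
    out/minikube-linux-amd64 start -p json-output-error-726790 --memory=2200 --output=json --wait=true --driver=fail
    # clean up the never-created profile afterwards
    out/minikube-linux-amd64 delete -p json-output-error-726790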
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)
TestMinikubeProfile (86.17s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-450571 --driver=kvm2  --container-runtime=crio
E0403 18:49:34.405628   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-450571 --driver=kvm2  --container-runtime=crio: (39.244873131s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-463119 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-463119 --driver=kvm2  --container-runtime=crio: (44.205962793s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-450571
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-463119
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-463119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-463119
helpers_test.go:175: Cleaning up "first-450571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-450571
--- PASS: TestMinikubeProfile (86.17s)
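Sketch of the profile round-trip performed above, assuming two throwaway profiles; the profile list -ojson output is what gets inspected after each switch:
    out/minikube-linux-amd64 start -p first-450571 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-463119 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 profile first-450571
    out/minikube-linux-amd64 profile list -ojson
    out/minikube-linux-amd64 delete -p second-463119
    out/minikube-linux-amd64 delete -p first-450571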
TestMountStart/serial/StartWithMountFirst (29.08s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-184063 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-184063 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.080097361s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.08s)
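The 9p mount options used here can be passed to a plain start; a sketch with the values from this run (uid/gid 0, msize 6543, port 46464), and the later Verify steps simply check the mount from inside the VM:
    out/minikube-linux-amd64 start -p mount-start-1-184063 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-1-184063 ssh -- mount | grep 9p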
TestMountStart/serial/VerifyMountFirst (0.36s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-184063 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-184063 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
TestMountStart/serial/StartWithMountSecond (29.7s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-196520 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-196520 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.702151149s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.70s)
TestMountStart/serial/VerifyMountSecond (0.36s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-196520 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-196520 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)
TestMountStart/serial/DeleteFirst (0.87s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-184063 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)
TestMountStart/serial/VerifyMountPostDelete (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-196520 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-196520 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)
TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-196520
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-196520: (1.262934722s)
--- PASS: TestMountStart/serial/Stop (1.26s)
TestMountStart/serial/RestartStopped (23.2s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-196520
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-196520: (22.198655803s)
--- PASS: TestMountStart/serial/RestartStopped (23.20s)
TestMountStart/serial/VerifyMountPostStop (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-196520 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-196520 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)
TestMultiNode/serial/FreshStart2Nodes (116.23s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-953539 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0403 18:54:09.255813   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-953539 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.832999036s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.23s)
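A two-node cluster like the one used for the rest of TestMultiNode can be created with the same flags; a sketch assuming the kvm2 driver is available locally:
    out/minikube-linux-amd64 start -p multinode-953539 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-953539 status --alsologtostderr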
TestMultiNode/serial/DeployApp2Nodes (6.02s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-953539 -- rollout status deployment/busybox: (4.653709226s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec busybox-58667487b6-4cwnn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec busybox-58667487b6-lm7hb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec busybox-58667487b6-4cwnn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec busybox-58667487b6-lm7hb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec busybox-58667487b6-4cwnn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec busybox-58667487b6-lm7hb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.02s)
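The DNS checks above boil down to deploying the busybox manifest and resolving cluster names from each replica; a sketch, where the manifest path is relative to the minikube test directory and <pod-name> stands for whatever get pods returns:
    out/minikube-linux-amd64 kubectl -p multinode-953539 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-953539 -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p multinode-953539 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local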
TestMultiNode/serial/PingHostFrom2Pods (0.73s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec busybox-58667487b6-4cwnn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec busybox-58667487b6-4cwnn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec busybox-58667487b6-lm7hb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-953539 -- exec busybox-58667487b6-lm7hb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)
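
The exec commands above pull the host IP out of busybox's nslookup output with awk 'NR==5' | cut -d' ' -f3 and then ping it. A minimal Go sketch of that same line-5/field-3 selection, run against a hypothetical nslookup transcript (real output depends on the resolver), looks like this:

package main

import (
	"fmt"
	"strings"
)

// thirdFieldOfLine5 mirrors `awk 'NR==5' | cut -d' ' -f3`: keep only the
// fifth line of the input and return its third space-separated field.
func thirdFieldOfLine5(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox nslookup output for host.minikube.internal.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(thirdFieldOfLine5(sample)) // 192.168.39.1
}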

                                                
                                    
TestMultiNode/serial/AddNode (47.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-953539 -v 3 --alsologtostderr
E0403 18:54:34.403185   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-953539 -v 3 --alsologtostderr: (46.816629595s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.35s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-953539 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.55s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.87s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp testdata/cp-test.txt multinode-953539:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp multinode-953539:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2378693842/001/cp-test_multinode-953539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp multinode-953539:/home/docker/cp-test.txt multinode-953539-m02:/home/docker/cp-test_multinode-953539_multinode-953539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m02 "sudo cat /home/docker/cp-test_multinode-953539_multinode-953539-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp multinode-953539:/home/docker/cp-test.txt multinode-953539-m03:/home/docker/cp-test_multinode-953539_multinode-953539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m03 "sudo cat /home/docker/cp-test_multinode-953539_multinode-953539-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp testdata/cp-test.txt multinode-953539-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp multinode-953539-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2378693842/001/cp-test_multinode-953539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp multinode-953539-m02:/home/docker/cp-test.txt multinode-953539:/home/docker/cp-test_multinode-953539-m02_multinode-953539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539 "sudo cat /home/docker/cp-test_multinode-953539-m02_multinode-953539.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp multinode-953539-m02:/home/docker/cp-test.txt multinode-953539-m03:/home/docker/cp-test_multinode-953539-m02_multinode-953539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m03 "sudo cat /home/docker/cp-test_multinode-953539-m02_multinode-953539-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp testdata/cp-test.txt multinode-953539-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp multinode-953539-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2378693842/001/cp-test_multinode-953539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp multinode-953539-m03:/home/docker/cp-test.txt multinode-953539:/home/docker/cp-test_multinode-953539-m03_multinode-953539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539 "sudo cat /home/docker/cp-test_multinode-953539-m03_multinode-953539.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 cp multinode-953539-m03:/home/docker/cp-test.txt multinode-953539-m02:/home/docker/cp-test_multinode-953539-m03_multinode-953539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 ssh -n multinode-953539-m02 "sudo cat /home/docker/cp-test_multinode-953539-m03_multinode-953539-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.87s)

                                                
                                    
TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-953539 node stop m03: (1.400127034s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-953539 status: exit status 7 (400.508265ms)

                                                
                                                
-- stdout --
	multinode-953539
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-953539-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-953539-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-953539 status --alsologtostderr: exit status 7 (404.774409ms)

                                                
                                                
-- stdout --
	multinode-953539
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-953539-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-953539-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0403 18:55:15.755290   48054 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:55:15.755375   48054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:55:15.755382   48054 out.go:358] Setting ErrFile to fd 2...
	I0403 18:55:15.755386   48054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:55:15.755534   48054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 18:55:15.755661   48054 out.go:352] Setting JSON to false
	I0403 18:55:15.755688   48054 mustload.go:65] Loading cluster: multinode-953539
	I0403 18:55:15.755792   48054 notify.go:220] Checking for updates...
	I0403 18:55:15.756074   48054 config.go:182] Loaded profile config "multinode-953539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 18:55:15.756095   48054 status.go:174] checking status of multinode-953539 ...
	I0403 18:55:15.756471   48054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:55:15.756514   48054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:15.772249   48054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44705
	I0403 18:55:15.772736   48054 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:15.773266   48054 main.go:141] libmachine: Using API Version  1
	I0403 18:55:15.773284   48054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:15.773660   48054 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:15.773873   48054 main.go:141] libmachine: (multinode-953539) Calling .GetState
	I0403 18:55:15.775601   48054 status.go:371] multinode-953539 host status = "Running" (err=<nil>)
	I0403 18:55:15.775619   48054 host.go:66] Checking if "multinode-953539" exists ...
	I0403 18:55:15.775976   48054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:55:15.776022   48054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:15.791219   48054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0403 18:55:15.791577   48054 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:15.792012   48054 main.go:141] libmachine: Using API Version  1
	I0403 18:55:15.792030   48054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:15.792333   48054 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:15.792504   48054 main.go:141] libmachine: (multinode-953539) Calling .GetIP
	I0403 18:55:15.794909   48054 main.go:141] libmachine: (multinode-953539) DBG | domain multinode-953539 has defined MAC address 52:54:00:cf:1c:fe in network mk-multinode-953539
	I0403 18:55:15.795366   48054 main.go:141] libmachine: (multinode-953539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:1c:fe", ip: ""} in network mk-multinode-953539: {Iface:virbr1 ExpiryTime:2025-04-03 19:52:30 +0000 UTC Type:0 Mac:52:54:00:cf:1c:fe Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-953539 Clientid:01:52:54:00:cf:1c:fe}
	I0403 18:55:15.795394   48054 main.go:141] libmachine: (multinode-953539) DBG | domain multinode-953539 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:1c:fe in network mk-multinode-953539
	I0403 18:55:15.795504   48054 host.go:66] Checking if "multinode-953539" exists ...
	I0403 18:55:15.795803   48054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:55:15.795849   48054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:15.810465   48054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0403 18:55:15.810913   48054 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:15.811309   48054 main.go:141] libmachine: Using API Version  1
	I0403 18:55:15.811333   48054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:15.811646   48054 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:15.811836   48054 main.go:141] libmachine: (multinode-953539) Calling .DriverName
	I0403 18:55:15.812026   48054 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0403 18:55:15.812058   48054 main.go:141] libmachine: (multinode-953539) Calling .GetSSHHostname
	I0403 18:55:15.814430   48054 main.go:141] libmachine: (multinode-953539) DBG | domain multinode-953539 has defined MAC address 52:54:00:cf:1c:fe in network mk-multinode-953539
	I0403 18:55:15.814865   48054 main.go:141] libmachine: (multinode-953539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:1c:fe", ip: ""} in network mk-multinode-953539: {Iface:virbr1 ExpiryTime:2025-04-03 19:52:30 +0000 UTC Type:0 Mac:52:54:00:cf:1c:fe Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-953539 Clientid:01:52:54:00:cf:1c:fe}
	I0403 18:55:15.814895   48054 main.go:141] libmachine: (multinode-953539) DBG | domain multinode-953539 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:1c:fe in network mk-multinode-953539
	I0403 18:55:15.814983   48054 main.go:141] libmachine: (multinode-953539) Calling .GetSSHPort
	I0403 18:55:15.815125   48054 main.go:141] libmachine: (multinode-953539) Calling .GetSSHKeyPath
	I0403 18:55:15.815264   48054 main.go:141] libmachine: (multinode-953539) Calling .GetSSHUsername
	I0403 18:55:15.815363   48054 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/multinode-953539/id_rsa Username:docker}
	I0403 18:55:15.893797   48054 ssh_runner.go:195] Run: systemctl --version
	I0403 18:55:15.899507   48054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:55:15.914020   48054 kubeconfig.go:125] found "multinode-953539" server: "https://192.168.39.145:8443"
	I0403 18:55:15.914052   48054 api_server.go:166] Checking apiserver status ...
	I0403 18:55:15.914089   48054 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 18:55:15.927650   48054 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0403 18:55:15.936859   48054 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0403 18:55:15.936920   48054 ssh_runner.go:195] Run: ls
	I0403 18:55:15.941156   48054 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0403 18:55:15.945205   48054 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0403 18:55:15.945228   48054 status.go:463] multinode-953539 apiserver status = Running (err=<nil>)
	I0403 18:55:15.945239   48054 status.go:176] multinode-953539 status: &{Name:multinode-953539 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:55:15.945268   48054 status.go:174] checking status of multinode-953539-m02 ...
	I0403 18:55:15.945650   48054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:55:15.945692   48054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:15.960746   48054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44385
	I0403 18:55:15.961203   48054 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:15.961630   48054 main.go:141] libmachine: Using API Version  1
	I0403 18:55:15.961647   48054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:15.961919   48054 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:15.962093   48054 main.go:141] libmachine: (multinode-953539-m02) Calling .GetState
	I0403 18:55:15.963744   48054 status.go:371] multinode-953539-m02 host status = "Running" (err=<nil>)
	I0403 18:55:15.963761   48054 host.go:66] Checking if "multinode-953539-m02" exists ...
	I0403 18:55:15.964154   48054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:55:15.964229   48054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:15.979720   48054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I0403 18:55:15.980212   48054 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:15.980652   48054 main.go:141] libmachine: Using API Version  1
	I0403 18:55:15.980676   48054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:15.980979   48054 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:15.981181   48054 main.go:141] libmachine: (multinode-953539-m02) Calling .GetIP
	I0403 18:55:15.984094   48054 main.go:141] libmachine: (multinode-953539-m02) DBG | domain multinode-953539-m02 has defined MAC address 52:54:00:8a:00:56 in network mk-multinode-953539
	I0403 18:55:15.984536   48054 main.go:141] libmachine: (multinode-953539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:00:56", ip: ""} in network mk-multinode-953539: {Iface:virbr1 ExpiryTime:2025-04-03 19:53:29 +0000 UTC Type:0 Mac:52:54:00:8a:00:56 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:multinode-953539-m02 Clientid:01:52:54:00:8a:00:56}
	I0403 18:55:15.984567   48054 main.go:141] libmachine: (multinode-953539-m02) DBG | domain multinode-953539-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:8a:00:56 in network mk-multinode-953539
	I0403 18:55:15.984689   48054 host.go:66] Checking if "multinode-953539-m02" exists ...
	I0403 18:55:15.985092   48054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:55:15.985137   48054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:16.000258   48054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0403 18:55:16.000736   48054 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:16.001233   48054 main.go:141] libmachine: Using API Version  1
	I0403 18:55:16.001252   48054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:16.001604   48054 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:16.001755   48054 main.go:141] libmachine: (multinode-953539-m02) Calling .DriverName
	I0403 18:55:16.001935   48054 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0403 18:55:16.001953   48054 main.go:141] libmachine: (multinode-953539-m02) Calling .GetSSHHostname
	I0403 18:55:16.004519   48054 main.go:141] libmachine: (multinode-953539-m02) DBG | domain multinode-953539-m02 has defined MAC address 52:54:00:8a:00:56 in network mk-multinode-953539
	I0403 18:55:16.004979   48054 main.go:141] libmachine: (multinode-953539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:00:56", ip: ""} in network mk-multinode-953539: {Iface:virbr1 ExpiryTime:2025-04-03 19:53:29 +0000 UTC Type:0 Mac:52:54:00:8a:00:56 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:multinode-953539-m02 Clientid:01:52:54:00:8a:00:56}
	I0403 18:55:16.005002   48054 main.go:141] libmachine: (multinode-953539-m02) DBG | domain multinode-953539-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:8a:00:56 in network mk-multinode-953539
	I0403 18:55:16.005190   48054 main.go:141] libmachine: (multinode-953539-m02) Calling .GetSSHPort
	I0403 18:55:16.005336   48054 main.go:141] libmachine: (multinode-953539-m02) Calling .GetSSHKeyPath
	I0403 18:55:16.005480   48054 main.go:141] libmachine: (multinode-953539-m02) Calling .GetSSHUsername
	I0403 18:55:16.005590   48054 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-14371/.minikube/machines/multinode-953539-m02/id_rsa Username:docker}
	I0403 18:55:16.081646   48054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:55:16.095343   48054 status.go:176] multinode-953539-m02 status: &{Name:multinode-953539-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:55:16.095372   48054 status.go:174] checking status of multinode-953539-m03 ...
	I0403 18:55:16.095667   48054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 18:55:16.095704   48054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:16.111232   48054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46347
	I0403 18:55:16.111762   48054 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:16.112246   48054 main.go:141] libmachine: Using API Version  1
	I0403 18:55:16.112274   48054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:16.112661   48054 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:16.112841   48054 main.go:141] libmachine: (multinode-953539-m03) Calling .GetState
	I0403 18:55:16.114405   48054 status.go:371] multinode-953539-m03 host status = "Stopped" (err=<nil>)
	I0403 18:55:16.114420   48054 status.go:384] host is not running, skipping remaining checks
	I0403 18:55:16.114427   48054 status.go:176] multinode-953539-m03 status: &{Name:multinode-953539-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
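
The --alsologtostderr trace above shows how status decides the apiserver is Running: it SSHes in, finds the kube-apiserver process, then probes https://192.168.39.145:8443/healthz and treats a 200 "ok" as healthy. A minimal, self-contained Go sketch of that final probe (illustrative only, not minikube's code; it skips TLS verification instead of using the cluster CA, and the host:port is the one from the log above):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy probes /healthz and reports whether it answered 200 with
// body "ok". TLS verification is skipped purely to keep the sketch
// self-contained.
func apiserverHealthy(hostPort string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + hostPort + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("192.168.39.145:8443")
	fmt.Println(healthy, err)
}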

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.31s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-953539 node start m03 -v=7 --alsologtostderr: (38.71113263s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.31s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (338.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-953539
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-953539
E0403 18:57:37.474601   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-953539: (3m2.998310405s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-953539 --wait=true -v=8 --alsologtostderr
E0403 18:59:09.255724   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:59:34.401794   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-953539 --wait=true -v=8 --alsologtostderr: (2m35.290032567s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-953539
--- PASS: TestMultiNode/serial/RestartKeepsNodes (338.38s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.64s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-953539 node delete m03: (2.139884015s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.64s)
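
The go-template passed to kubectl above walks every node's status.conditions and prints the status of the condition whose type is "Ready", so after deleting m03 the test expects one "True" per remaining node. A minimal Go sketch of that template evaluated with text/template over a simplified, hand-written stand-in for the kubectl JSON (not real API output):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Hand-written stand-in for `kubectl get nodes -o json`, reduced to the
	// fields the template touches.
	raw := `{"items":[
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
	]}`
	var nodes interface{}
	if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
		panic(err)
	}
	// Same template string as in the test, minus the outer shell quoting.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints " True" once per node in the stand-in data.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}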

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.6s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 stop
E0403 19:02:12.324049   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:04:09.260245   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:04:34.405475   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-953539 stop: (3m1.440490724s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-953539 status: exit status 7 (82.772334ms)

                                                
                                                
-- stdout --
	multinode-953539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-953539-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-953539 status --alsologtostderr: exit status 7 (79.266184ms)

                                                
                                                
-- stdout --
	multinode-953539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-953539-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0403 19:04:38.004417   51045 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:04:38.004652   51045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:04:38.004660   51045 out.go:358] Setting ErrFile to fd 2...
	I0403 19:04:38.004664   51045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:04:38.004836   51045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:04:38.004987   51045 out.go:352] Setting JSON to false
	I0403 19:04:38.005028   51045 mustload.go:65] Loading cluster: multinode-953539
	I0403 19:04:38.005110   51045 notify.go:220] Checking for updates...
	I0403 19:04:38.005717   51045 config.go:182] Loaded profile config "multinode-953539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:04:38.005764   51045 status.go:174] checking status of multinode-953539 ...
	I0403 19:04:38.006741   51045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:04:38.006999   51045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:04:38.021773   51045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0403 19:04:38.022179   51045 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:04:38.022690   51045 main.go:141] libmachine: Using API Version  1
	I0403 19:04:38.022713   51045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:04:38.023056   51045 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:04:38.023238   51045 main.go:141] libmachine: (multinode-953539) Calling .GetState
	I0403 19:04:38.024598   51045 status.go:371] multinode-953539 host status = "Stopped" (err=<nil>)
	I0403 19:04:38.024617   51045 status.go:384] host is not running, skipping remaining checks
	I0403 19:04:38.024624   51045 status.go:176] multinode-953539 status: &{Name:multinode-953539 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 19:04:38.024654   51045 status.go:174] checking status of multinode-953539-m02 ...
	I0403 19:04:38.024967   51045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0403 19:04:38.025000   51045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:04:38.039565   51045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I0403 19:04:38.039999   51045 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:04:38.040381   51045 main.go:141] libmachine: Using API Version  1
	I0403 19:04:38.040402   51045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:04:38.040759   51045 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:04:38.040927   51045 main.go:141] libmachine: (multinode-953539-m02) Calling .GetState
	I0403 19:04:38.042199   51045 status.go:371] multinode-953539-m02 host status = "Stopped" (err=<nil>)
	I0403 19:04:38.042208   51045 status.go:384] host is not running, skipping remaining checks
	I0403 19:04:38.042212   51045 status.go:176] multinode-953539-m02 status: &{Name:multinode-953539-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.60s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (114.52s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-953539 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-953539 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.007514482s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-953539 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (114.52s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-953539
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-953539-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-953539-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (65.561325ms)

                                                
                                                
-- stdout --
	* [multinode-953539-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-953539-m02' is duplicated with machine name 'multinode-953539-m02' in profile 'multinode-953539'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-953539-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-953539-m03 --driver=kvm2  --container-runtime=crio: (45.66015466s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-953539
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-953539: exit status 80 (209.421609ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-953539 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-953539-m03 already exists in multinode-953539-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-953539-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.77s)

                                                
                                    
TestScheduledStopUnix (116.64s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-500587 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-500587 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.07534755s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-500587 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-500587 -n scheduled-stop-500587
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-500587 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0403 19:11:40.048970   21552 retry.go:31] will retry after 99.133µs: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.050155   21552 retry.go:31] will retry after 188.019µs: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.051293   21552 retry.go:31] will retry after 278.073µs: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.052411   21552 retry.go:31] will retry after 247.344µs: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.053528   21552 retry.go:31] will retry after 516.736µs: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.054633   21552 retry.go:31] will retry after 828.989µs: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.055758   21552 retry.go:31] will retry after 1.120061ms: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.057959   21552 retry.go:31] will retry after 1.046243ms: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.059094   21552 retry.go:31] will retry after 2.663894ms: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.062291   21552 retry.go:31] will retry after 5.628524ms: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.068496   21552 retry.go:31] will retry after 7.525629ms: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.076738   21552 retry.go:31] will retry after 10.898481ms: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.087984   21552 retry.go:31] will retry after 9.15609ms: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
I0403 19:11:40.098267   21552 retry.go:31] will retry after 28.757698ms: open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/scheduled-stop-500587/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-500587 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-500587 -n scheduled-stop-500587
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-500587
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-500587 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-500587
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-500587: exit status 7 (64.784104ms)

                                                
                                                
-- stdout --
	scheduled-stop-500587
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-500587 -n scheduled-stop-500587
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-500587 -n scheduled-stop-500587: exit status 7 (62.934336ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-500587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-500587
--- PASS: TestScheduledStopUnix (116.64s)
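
The retry.go lines above show the test polling for the scheduled-stop pid file with a delay that roughly grows, with jitter, on every attempt. A minimal Go sketch of that kind of retry loop, using a hypothetical path and deadline (not the helper minikube actually uses):

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile keeps retrying os.Open with a growing, jittered delay until it
// succeeds or the deadline passes.
func waitForFile(path string, deadline time.Duration) (*os.File, error) {
	delay := 100 * time.Microsecond
	start := time.Now()
	for {
		f, err := os.Open(path)
		if err == nil {
			return f, nil
		}
		if time.Since(start) > deadline {
			return nil, fmt.Errorf("gave up waiting for %s: %w", path, err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay*2 + time.Duration(rand.Int63n(int64(delay))) // grow with jitter
	}
}

func main() {
	// Hypothetical pid file path; the test waits on the profile's scheduled-stop pid.
	if f, err := waitForFile("/tmp/scheduled-stop-example.pid", 2*time.Second); err == nil {
		f.Close()
	}
}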

                                                
                                    
TestRunningBinaryUpgrade (221.8s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2442596341 start -p running-upgrade-520004 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0403 19:14:09.256291   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:14:17.476718   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2442596341 start -p running-upgrade-520004 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.390821108s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-520004 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-520004 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.839227011s)
helpers_test.go:175: Cleaning up "running-upgrade-520004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-520004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-520004: (1.256745538s)
--- PASS: TestRunningBinaryUpgrade (221.80s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-514983 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-514983 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.49436ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-514983] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.05s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-514983 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-514983 --driver=kvm2  --container-runtime=crio: (1m33.815348927s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-514983 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.05s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.24s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-514983 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0403 19:14:34.401385   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-514983 --no-kubernetes --driver=kvm2  --container-runtime=crio: (16.99925211s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-514983 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-514983 status -o json: exit status 2 (221.666074ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-514983","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-514983
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-514983: (1.016889679s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.24s)
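
The status -o json output above is a single flat object per profile. A minimal Go sketch of decoding it, with struct fields copied from the keys shown in that line (after --no-kubernetes the host stays Running while kubelet and apiserver are Stopped):

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the keys in the `minikube status -o json` line above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-514983","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped") // true
}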

                                                
                                    
TestNoKubernetes/serial/Start (25.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-514983 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-514983 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.107856804s)
--- PASS: TestNoKubernetes/serial/Start (25.11s)

                                                
                                    
TestPause/serial/Start (72.5s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-942912 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-942912 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m12.495108159s)
--- PASS: TestPause/serial/Start (72.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-514983 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-514983 "sudo systemctl is-active --quiet service kubelet": exit status 1 (192.017208ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.3s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.30s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-514983
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-514983: (1.286279435s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (45.14s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-514983 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-514983 --driver=kvm2  --container-runtime=crio: (45.144283113s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (45.14s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-514983 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-514983 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.554058ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.32s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.32s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (100.62s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.115054503 start -p stopped-upgrade-413283 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.115054503 start -p stopped-upgrade-413283 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (49.879014153s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.115054503 -p stopped-upgrade-413283 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.115054503 -p stopped-upgrade-413283 stop: (1.405380118s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-413283 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-413283 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.337848921s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.62s)

                                                
                                    
TestNetworkPlugins/group/false (5.42s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-999005 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-999005 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (103.859766ms)

                                                
                                                
-- stdout --
	* [false-999005] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0403 19:17:18.345141   59195 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:17:18.345296   59195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:17:18.345308   59195 out.go:358] Setting ErrFile to fd 2...
	I0403 19:17:18.345314   59195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:17:18.345600   59195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-14371/.minikube/bin
	I0403 19:17:18.346427   59195 out.go:352] Setting JSON to false
	I0403 19:17:18.347756   59195 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7183,"bootTime":1743700655,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:17:18.347844   59195 start.go:139] virtualization: kvm guest
	I0403 19:17:18.349797   59195 out.go:177] * [false-999005] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:17:18.351086   59195 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:17:18.351083   59195 notify.go:220] Checking for updates...
	I0403 19:17:18.352178   59195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:17:18.353275   59195 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-14371/kubeconfig
	I0403 19:17:18.354510   59195 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-14371/.minikube
	I0403 19:17:18.355598   59195 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:17:18.357851   59195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:17:18.359581   59195 config.go:182] Loaded profile config "kubernetes-upgrade-523797": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0403 19:17:18.359714   59195 config.go:182] Loaded profile config "pause-942912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0403 19:17:18.359806   59195 config.go:182] Loaded profile config "stopped-upgrade-413283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0403 19:17:18.359894   59195 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:17:18.395738   59195 out.go:177] * Using the kvm2 driver based on user configuration
	I0403 19:17:18.396917   59195 start.go:297] selected driver: kvm2
	I0403 19:17:18.396932   59195 start.go:901] validating driver "kvm2" against <nil>
	I0403 19:17:18.396944   59195 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:17:18.398641   59195 out.go:201] 
	W0403 19:17:18.399730   59195 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0403 19:17:18.400709   59195 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-999005 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-999005" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:17:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.237:8443
  name: pause-942912
contexts:
- context:
    cluster: pause-942912
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:17:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-942912
  name: pause-942912
current-context: pause-942912
kind: Config
preferences: {}
users:
- name: pause-942912
  user:
    client-certificate: /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/client.crt
    client-key: /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-999005

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-999005"

                                                
                                                
----------------------- debugLogs end: false-999005 [took: 4.675345037s] --------------------------------
helpers_test.go:175: Cleaning up "false-999005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-999005
--- PASS: TestNetworkPlugins/group/false (5.42s)
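
The interesting part of this run is the fast-fail validation shown in the stderr block above: with --container-runtime=crio, minikube rejects --cni=false before creating any VM and exits with MK_USAGE (exit status 14), because the crio runtime needs a CNI to be configured. As a rough sketch (not part of the test itself), the same profile would get past that validation if an explicit CNI were requested instead, assuming bridge is acceptable here:

    # rejected: crio requires a CNI, so --cni=false fails fast with MK_USAGE (exit status 14)
    out/minikube-linux-amd64 start -p false-999005 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    # passes the same validation: request an explicit CNI such as bridge
    out/minikube-linux-amd64 start -p false-999005 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio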

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-413283
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (65.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-840360 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0403 19:19:34.401370   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-840360 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m5.729109337s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (114.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-409395 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-409395 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m54.730760218s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (114.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-840360 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5d7776e6-51df-4db3-a91e-69587c6a1a74] Pending
helpers_test.go:344: "busybox" [5d7776e6-51df-4db3-a91e-69587c6a1a74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5d7776e6-51df-4db3-a91e-69587c6a1a74] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004289442s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-840360 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)
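
DeployApp applies testdata/busybox.yaml, waits for the pod labelled integration-test=busybox to become Ready, then execs a trivial command in it. A hedged sketch of the same flow driven directly with kubectl against this profile (the --timeout value is illustrative):

    kubectl --context embed-certs-840360 create -f testdata/busybox.yaml
    # block until the pod carrying the test label reports Ready
    kubectl --context embed-certs-840360 wait --for=condition=ready pod -l integration-test=busybox --timeout=120s
    # the test's sanity check: run a command inside the pod
    kubectl --context embed-certs-840360 exec busybox -- /bin/sh -c "ulimit -n"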

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-840360 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-840360 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (90.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-840360 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-840360 --alsologtostderr -v=3: (1m30.814126211s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-409395 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6f55f937-f48f-4eb7-ae51-10ed0621a9ca] Pending
helpers_test.go:344: "busybox" [6f55f937-f48f-4eb7-ae51-10ed0621a9ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6f55f937-f48f-4eb7-ae51-10ed0621a9ca] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003098765s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-409395 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-409395 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-409395 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-409395 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-409395 --alsologtostderr -v=3: (1m31.301587267s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-875122 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-875122 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (52.377684676s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-840360 -n embed-certs-840360
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-840360 -n embed-certs-840360: exit status 7 (65.411798ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-840360 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
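
EnableAddonAfterStop exercises the stopped-profile path: status exits 7 and prints Stopped (which the test treats as acceptable), and addons enable still succeeds against the stopped profile, the expectation being that the addon comes up once the cluster is started again. A rough manual equivalent, reusing the commands and images from this run:

    # exits 7 while the host is stopped; the test only requires that the command ran
    out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-840360 -n embed-certs-840360
    # enabling an addon against the stopped profile is still accepted
    out/minikube-linux-amd64 addons enable dashboard -p embed-certs-840360 --images=MetricsScraper=registry.k8s.io/echoserver:1.4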

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (345.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-840360 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-840360 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m45.256476809s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-840360 -n embed-certs-840360
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (345.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-875122 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2a0cd264-d136-405b-be8a-692084b680ad] Pending
helpers_test.go:344: "busybox" [2a0cd264-d136-405b-be8a-692084b680ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2a0cd264-d136-405b-be8a-692084b680ad] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003658853s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-875122 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-875122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-875122 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (90.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-875122 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-875122 --alsologtostderr -v=3: (1m30.798562164s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (90.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-409395 -n no-preload-409395
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-409395 -n no-preload-409395: exit status 7 (68.700786ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-409395 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (350.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-409395 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-409395 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m50.461236716s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-409395 -n no-preload-409395
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (350.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-875122 -n default-k8s-diff-port-875122
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-875122 -n default-k8s-diff-port-875122: exit status 7 (61.265687ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-875122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-875122 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-875122 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (4m59.488147708s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-875122 -n default-k8s-diff-port-875122
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.77s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-471019 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-471019 --alsologtostderr -v=3: (2.292767102s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-471019 -n old-k8s-version-471019: exit status 7 (67.802868ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-471019 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qcdkb" [f56f04ff-bb8a-43c4-ad4e-ce706329d625] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qcdkb" [f56f04ff-bb8a-43c4-ad4e-ce706329d625] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.003387328s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qcdkb" [f56f04ff-bb8a-43c4-ad4e-ce706329d625] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004367342s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-840360 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-840360 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-840360 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-840360 -n embed-certs-840360
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-840360 -n embed-certs-840360: exit status 2 (240.059428ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-840360 -n embed-certs-840360
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-840360 -n embed-certs-840360: exit status 2 (246.51454ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-840360 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-840360 -n embed-certs-840360
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-840360 -n embed-certs-840360
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.59s)
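
The Pause sequence above is: pause the profile, confirm via status that the API server reports Paused and the kubelet reports Stopped (both of those status calls exit 2, which the test tolerates), then unpause and query status again. A minimal sketch of the same round trip by hand, using the commands from this run:

    out/minikube-linux-amd64 pause -p embed-certs-840360 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-840360 -n embed-certs-840360   # Paused, exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-840360 -n embed-certs-840360     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p embed-certs-840360 --alsologtostderr -v=1
    # after unpause both status queries should exit 0 again
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-840360 -n embed-certs-840360
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-840360 -n embed-certs-840360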

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-649672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-649672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (49.250882851s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-649672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-649672 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-649672 --alsologtostderr -v=3: (7.317681987s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xrdmt" [9ce87ce4-30d0-4671-b6c3-35c3a192562d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0403 19:29:09.256357   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/functional-789300/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xrdmt" [9ce87ce4-30d0-4671-b6c3-35c3a192562d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004411741s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-649672 -n newest-cni-649672
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-649672 -n newest-cni-649672: exit status 7 (62.345207ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-649672 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-649672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-649672 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (37.773149029s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-649672 -n newest-cni-649672
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xrdmt" [9ce87ce4-30d0-4671-b6c3-35c3a192562d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004275663s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-409395 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-409395 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
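
Note: VerifyKubernetesImages dumps the images present in the node's container runtime as JSON and logs anything outside the expected core set (here the busybox test image, which the test merely reports rather than failing on). A minimal way to inspect the same list for this profile, using the command from the log; dropping --format=json falls back to minikube's table output, which can be easier to scan by eye:

    out/minikube-linux-amd64 -p no-preload-409395 image list --format=json
    # Same listing, human-readable table:
    out/minikube-linux-amd64 -p no-preload-409395 image list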

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-409395 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-409395 -n no-preload-409395
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-409395 -n no-preload-409395: exit status 2 (236.077622ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-409395 -n no-preload-409395
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-409395 -n no-preload-409395: exit status 2 (233.15377ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-409395 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-409395 -n no-preload-409395
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-409395 -n no-preload-409395
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.62s)
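
Note: the Pause subtest drives a full pause/unpause cycle: after pausing, the API server component reports "Paused" and the kubelet "Stopped", and status exits with code 2 for a paused component, which the test treats as expected; unpausing restores both. A minimal manual sketch with the same commands and the no-preload-409395 profile from the log:

    out/minikube-linux-amd64 pause -p no-preload-409395 --alsologtostderr -v=1
    # While paused these print "Paused" / "Stopped" and exit with status 2.
    out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-409395 -n no-preload-409395
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-409395 -n no-preload-409395
    out/minikube-linux-amd64 unpause -p no-preload-409395 --alsologtostderr -v=1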

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (83.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0403 19:29:34.401271   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m23.40874495s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lnd25" [73309589-4e89-454c-8ba9-af834ecb1814] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007346911s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lnd25" [73309589-4e89-454c-8ba9-af834ecb1814] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004954558s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-875122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-875122 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-875122 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-875122 -n default-k8s-diff-port-875122
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-875122 -n default-k8s-diff-port-875122: exit status 2 (256.553062ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-875122 -n default-k8s-diff-port-875122
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-875122 -n default-k8s-diff-port-875122: exit status 2 (272.597635ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-875122 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-875122 -n default-k8s-diff-port-875122
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-875122 -n default-k8s-diff-port-875122
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (73.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m13.610311793s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.61s)
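
Note: each TestNetworkPlugins group boots a dedicated profile with its CNI selected on the command line: --cni=kindnet, --cni=calico, --cni=flannel, --cni=bridge, a custom manifest via --cni=testdata/kube-flannel.yaml, or --enable-default-cni=true, all on the kvm2 driver with the crio runtime. As a minimal sketch, the kindnet variant from this run is simply:

    out/minikube-linux-amd64 start -p kindnet-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 --container-runtime=crio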

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-649672 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-649672 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-649672 -n newest-cni-649672
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-649672 -n newest-cni-649672: exit status 2 (227.275561ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-649672 -n newest-cni-649672
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-649672 -n newest-cni-649672: exit status 2 (252.59215ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-649672 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-649672 -n newest-cni-649672
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-649672 -n newest-cni-649672
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (110.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m50.246902671s)
--- PASS: TestNetworkPlugins/group/calico/Start (110.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-999005 "pgrep -a kubelet"
I0403 19:30:54.517470   21552 config.go:182] Loaded profile config "auto-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-999005 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-m6q4q" [11baa149-6830-46ac-ab65-a61bb4fb50b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0403 19:30:57.478624   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/addons-445082/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-m6q4q" [11baa149-6830-46ac-ab65-a61bb4fb50b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004132056s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-s267d" [f9ebf424-3c31-427e-af79-8f85629d80bf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004029431s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-999005 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
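
Note: the DNS, Localhost, and HairPin subtests all exec into the netcat deployment created by NetCatPod. DNS resolves kubernetes.default through the cluster DNS, Localhost dials port 8080 on the pod's own loopback, and HairPin dials the netcat Service by name, which only succeeds when hairpin traffic is routed back to the pod correctly. The commands, taken from the log for the auto-999005 context:

    kubectl --context auto-999005 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin check: a zero exit means traffic to the pod's own Service works.
    kubectl --context auto-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"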

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-999005 "pgrep -a kubelet"
I0403 19:31:10.736316   21552 config.go:182] Loaded profile config "kindnet-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-999005 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jtmkn" [58c03632-caa6-44d2-9d57-7cdcc3e0a2bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jtmkn" [58c03632-caa6-44d2-9d57-7cdcc3e0a2bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004432135s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (72.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m12.778071903s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-999005 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (93.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0403 19:31:39.415444   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/no-preload-409395/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:31:44.537372   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/no-preload-409395/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m33.895848095s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wprj2" [9cd68761-655b-4de4-a68b-bfe3c0d4ad84] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004527731s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-999005 "pgrep -a kubelet"
I0403 19:31:53.345542   21552 config.go:182] Loaded profile config "calico-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-999005 replace --force -f testdata/netcat-deployment.yaml
E0403 19:31:54.779751   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/no-preload-409395/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:149: (dbg) Done: kubectl --context calico-999005 replace --force -f testdata/netcat-deployment.yaml: (1.445268675s)
I0403 19:31:54.802489   21552 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2ft8r" [df8dbc62-9516-4a9b-80ca-1f438a97ef15] Pending
helpers_test.go:344: "netcat-5d86dc444-2ft8r" [df8dbc62-9516-4a9b-80ca-1f438a97ef15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2ft8r" [df8dbc62-9516-4a9b-80ca-1f438a97ef15] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004398101s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-999005 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (66.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m6.361178313s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-999005 "pgrep -a kubelet"
I0403 19:32:34.624209   21552 config.go:182] Loaded profile config "custom-flannel-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-999005 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lcqll" [8ae8ffa9-efad-4b94-94c3-fd5f3865812f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lcqll" [8ae8ffa9-efad-4b94-94c3-fd5f3865812f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004010105s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-999005 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (56.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0403 19:33:03.944104   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/default-k8s-diff-port-875122/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-999005 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (56.655090801s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-999005 "pgrep -a kubelet"
I0403 19:33:12.445770   21552 config.go:182] Loaded profile config "enable-default-cni-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-999005 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7kw2p" [794bd007-ab5f-4f29-97a7-1ab614ccd4f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0403 19:33:14.186381   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/default-k8s-diff-port-875122/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-7kw2p" [794bd007-ab5f-4f29-97a7-1ab614ccd4f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003419048s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-999005 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7j55m" [78fa42d1-accb-45ab-bcfe-406841a16263] Running
E0403 19:33:34.667861   21552 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/default-k8s-diff-port-875122/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004513655s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
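
Note: the ControllerPod subtests wait for the plugin's own daemon pods to become healthy before exercising connectivity: app=kindnet and k8s-app=calico-node in kube-system, and app=flannel in the kube-flannel namespace here. A rough manual equivalent, assuming the flannel-999005 context from this run (the wait invocation and timeout below are illustrative, not what the harness runs):

    kubectl --context flannel-999005 get pods -n kube-flannel -l app=flannel
    # Or block until the pods report Ready:
    kubectl --context flannel-999005 wait --for=condition=ready pod -l app=flannel -n kube-flannel --timeout=120s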

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-999005 "pgrep -a kubelet"
I0403 19:33:37.885353   21552 config.go:182] Loaded profile config "flannel-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-999005 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-fnqnh" [40675381-0fcd-49b9-aef0-b53e5802405f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-fnqnh" [40675381-0fcd-49b9-aef0-b53e5802405f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003939379s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-999005 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-999005 "pgrep -a kubelet"
I0403 19:33:59.191143   21552 config.go:182] Loaded profile config "bridge-999005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-999005 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9k4lg" [1a0266de-8267-4843-91ea-0259d83ba1bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9k4lg" [1a0266de-8267-4843-91ea-0259d83ba1bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003494459s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-999005 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-999005 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (40/321)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.28
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.02
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
263 TestStartStop/group/disable-driver-mounts 0.18
277 TestNetworkPlugins/group/kubenet 3.21
285 TestNetworkPlugins/group/cilium 3.91
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-445082 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-866534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-866534
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.21s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-999005 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-999005" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:15:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.237:8443
  name: pause-942912
contexts:
- context:
    cluster: pause-942912
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:15:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-942912
  name: pause-942912
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-942912
  user:
    client-certificate: /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/client.crt
    client-key: /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-999005

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-999005"

                                                
                                                
----------------------- debugLogs end: kubenet-999005 [took: 3.047883542s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-999005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-999005
--- SKIP: TestNetworkPlugins/group/kubenet (3.21s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.91s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-999005 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-999005" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20591-14371/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:17:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.237:8443
  name: pause-942912
contexts:
- context:
    cluster: pause-942912
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:17:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-942912
  name: pause-942912
current-context: pause-942912
kind: Config
preferences: {}
users:
- name: pause-942912
  user:
    client-certificate: /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/client.crt
    client-key: /home/jenkins/minikube-integration/20591-14371/.minikube/profiles/pause-942912/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-999005

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-999005" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-999005"

                                                
                                                
----------------------- debugLogs end: cilium-999005 [took: 3.739377438s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-999005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-999005
--- SKIP: TestNetworkPlugins/group/cilium (3.91s)

                                                
                                    